ICLR

Title
Scaling Pareto-Efficient Decision Making via Offline Multi-Objective RL
Abstract
The goal of multi-objective reinforcement learning (MORL) is to learn policies that simultaneously optimize multiple competing objectives. In practice, an agent’s preferences over the objectives may not be known a priori, and hence, we require policies that can generalize to arbitrary preferences at test time. In this work, we propose a new data-driven setup for offline MORL, where we wish to learn a preference-agnostic policy agent using only a finite dataset of offline demonstrations of other agents and their preferences. The key contributions of this work are two-fold. First, we introduce D4MORL, Datasets for MORL that are specifically designed for offline settings. It contains 1.8 million annotated demonstrations obtained by rolling out reference policies that optimize for randomly sampled preferences on 6 MuJoCo environments with 2-3 objectives each. Second, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that builds on and extends return-conditioned offline methods including Decision Transformers (Chen et al., 2021) and RvS (Emmons et al., 2021) via a novel preference-and-return-conditioned policy. Empirically, we show that PEDA closely approximates the behavioral policy on the D4MORL benchmark and provides an excellent approximation of the Pareto front with appropriate conditioning, as measured by the hypervolume and sparsity metrics.
1 INTRODUCTION
We are interested in learning agents for multi-objective reinforcement learning (MORL) that optimize for multiple competing objectives. This setting is commonly observed in many real-world scenarios. For instance, an autonomous car might trade off high speed and energy savings depending on the user’s preferences. If the user has a relatively high preference for speed, the agent will move fast regardless of power usage; on the other hand, if the user tries to save energy, the agent will keep a more steady speed. One key challenge with MORL is that different users might have different preferences over the objectives, and systematically exploring policies for each preference might be expensive, or even impossible. In the online setting, prior work considers several approximations based on scalarizing the vector-valued rewards of different objectives based on a single preference (Lin, 2005), learning an ensemble of policies based on enumerating preferences (Mossalam et al., 2016; Xu et al., 2020), or extensions of single-objective algorithms such as Q-learning to vectorized value functions (Yang et al., 2019).
We introduce the setting of offline multi-objective reinforcement learning for high-dimensional state and action spaces, where our goal is to train an MORL policy agent using an offline dataset of demonstrations from multiple agents with known preferences. Similar to the single-task setting, offline MORL can utilize auxiliary logged datasets, thus improving data efficiency and minimizing interactions when deploying agents in high-risk settings. In addition to its practical utility, offline RL (Levine et al., 2020) has enjoyed major successes in the last few years (Kumar et al., 2020; Kostrikov et al., 2021; Chen et al., 2021) on challenging high-dimensional environments for continuous control and game-playing. Our contributions in this work are two-fold, introducing benchmarking datasets and a new family of MORL algorithms, as described below.
We introduce Datasets for Multi-Objective Reinforcement Learning (D4MORL), a collection of 1.8 million trajectories on 6 multi-objective MuJoCo environments (Xu et al., 2020). Here, 5 environments consist of 2 objectives and 1 environment consists of 3 objectives. For each environment in D4MORL, we collect demonstrations from 2 pretrained behavioral agents: expert and amateur, where the relative expertise is defined in terms of the Pareto-efficiency of the agents and measured empirically via their hypervolumes. Furthermore, we also include 3 kinds of preference distributions with varying entropies to expose additional data-centric aspects for downstream benchmarking. The lack of MORL datasets and large-scale benchmarking has been a major challenge for basic research (Hayes et al., 2022), and we hope that D4MORL can aid future research in the field.
Next, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that extends return-conditioned methods including Decision Transformer (DT) (Chen et al., 2021) and RvS (Emmons et al., 2021) to the multi-objective setting. These methods learn a return-conditioned policy via a supervised loss on the predicted actions. In recent work, these methods have successfully scaled to agents that demonstrate broad capabilities in multi-task settings (Lee et al., 2022; Reed et al., 2022). For MORL, we introduce a novel preference- and return-conditioned policy network and train it via a supervised learning loss. At test time, naively conditioning on the default preferences and maximum possible returns leads to out-of-distribution behavior for the model, as neither has it seen maximum returns for all objectives in the training data, nor is it possible to simultaneously maximize all objectives under competition. We address this issue by learning to map preferences to appropriate returns, hence enabling predictable generalization at test time.
Empirically, we find PEDA performs exceedingly well on D4MORL and closely approximates the reference Pareto-frontier of the behavioral policy used for data generation. In the multi-objective HalfCheetah environment, compared with an average upper bound on the hypervolume of $5.79 \times 10^6$ achieved by the behavioral policy, PEDA achieves an average hypervolume of $5.77 \times 10^6$ on the Expert and $5.76 \times 10^6$ on the Amateur datasets.
2 RELATED WORK
Multi-Objective Reinforcement Learning Predominant works in MORL focus on the online setting where the goal is to train agents that can generalize to arbitrary preferences. This can be achieved by training a single preference-conditioned policy (Yang et al., 2019; Parisi et al., 2016), or an ensemble of single-objective policies for a finite set of preferences (Mossalam et al., 2016; Xu et al., 2020; Zhang & Li, 2007). Many of these algorithms consider vectorized variants of standard algorithms such as Q-learning (Mossalam et al., 2016; Yang et al., 2019), often augmented with strategies to guide the policy ensemble towards the Pareto front using evolutionary or incrementally updated algorithms (Xu et al., 2020; Zhang & Li, 2007; Mossalam et al., 2016; Roijers et al., 2014; Huang et al., 2022). Other approaches have also been studied, such as framing MORL as a meta-learning problem (Chen et al., 2019), learning the action distribution for each objective (Abdolmaleki et al., 2020), and learning the relationship between objectives (Zhan & Cao, 2019) among others. In contrast to these online MORL works, our focus is on learning a single policy that works for all preferences using only offline datasets.
There are also a few works that study decision-making with multiple objectives in the offline setting and sidestep any interaction with the environments. Wu et al. (2021) propose a provably efficient offline MORL algorithm for tabular MDPs based on dual gradient ascent. Thomas et al. (2021) study learning of safe policies by extending the approach of Laroche et al. (2019) to the offline MORL setting. Their proposed algorithm assumes knowledge of the behavioral policy used to collect the offline data and is demonstrated primarily on tabular MDPs with finite state and action spaces. In contrast, we are interested in developing dataset benchmarks and algorithms for scalable offline policy optimization in high-dimensional MDPs with continuous states and actions.
Multi-Task Reinforcement Learning MORL is also closely related to multi-task reinforcement learning, where every task can be interpreted as a distinct objective. There is an extensive body of work in learning multi-task policies both in the online and offline setups (Wilson et al., 2007; Lazaric & Ghavamzadeh, 2010; Teh et al., 2017) inter alia. However, the key difference is that typical MTRL benchmarks and algorithms do not consider solving multiple tasks that involve inherent trade-offs. Consequently, there is no notion of Pareto efficiency and an agent can simultaneously excel in all the tasks without accounting for user preferences.
Reinforcement Learning Via Supervised Learning A body of recent work has formulated offline reinforcement learning as an autoregressive sequence modeling problem using Decision Transformers (DT) or Trajectory Transformers (Chen et al., 2021; Janner et al., 2021). The key idea in DT is to learn a transformer-based policy that conditions on the past history and a dynamic estimate of the returns (a.k.a. returns-to-go). Follow-up works consider online learning (Zheng et al., 2022) as well as simpler variants that rely only on multi-layer perceptrons (Emmons et al., 2021). Such agents are generally more stable and robust to optimize due to the simplicity of the loss function, and easier to scale to more complex settings such as environments with high-dimensional actions or states, as shown in recent works in multi-task RL (Lee et al., 2022; Reed et al., 2022).
3 PRELIMINARIES
Setup and Notation. We operate in the general framework of a multi-objective Markov decision process (MOMDP) with linear preferences (Wakuta, 1995). An MOMDP is represented by the tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \Omega, f, \gamma \rangle$. At each timestep $t$, the agent with a current state $s_t \in \mathcal{S}$ takes an action $a_t \in \mathcal{A}$ to transition into a new state $s_{t+1}$ with probability $\mathcal{P}(s_{t+1} \mid s_t, a_t)$ and observes a reward vector $\mathbf{r}_t = \mathcal{R}(s_t, a_t) \in \mathbb{R}^n$. Here, $n$ is the number of objectives. The vector-valued return $\mathbf{R} \in \mathbb{R}^n$ of an agent is given by the discounted sum of reward vectors over a time horizon, $\mathbf{R} = \sum_t \gamma^t \mathbf{r}_t$. We also assume that there exists a linear utility function $f$ and a space of preferences $\Omega$ that can map the reward vector $\mathbf{r}_t$ and a preference vector $\boldsymbol{\omega} \in \Omega$ to a scalar reward $r_t$, i.e., $r_t = f(\mathbf{r}_t, \boldsymbol{\omega}) = \boldsymbol{\omega}^\top \mathbf{r}_t$. The expected vector return of a policy $\pi$ is given as $\mathbf{G}^\pi = [G^\pi_1, G^\pi_2, \ldots, G^\pi_n]^\top$, where the expected return of the $i$th objective is $G^\pi_i = \mathbb{E}_{a_{t+1} \sim \pi(\cdot \mid s_t, \boldsymbol{\omega})}\left[\sum_t \mathcal{R}(s_t, a_t)_i\right]$ for some predefined time horizon and preference vector $\boldsymbol{\omega}$. The goal is to train a multi-objective policy $\pi(a \mid s, \boldsymbol{\omega})$ such that the expected scalarized return $\boldsymbol{\omega}^\top \mathbf{G}^\pi = \mathbb{E}\left[\boldsymbol{\omega}^\top \sum_t \mathcal{R}(s_t, a_t)\right]$ is maximized.
Pareto Optimality. In MORL, one cannot optimize all objectives simultaneously, so policies are evaluated based on the Pareto set of their vector-valued expected returns. Consider a preference-conditioned policy $\pi(a \mid s, \boldsymbol{\omega})$ that is evaluated for $m$ distinct preferences $\boldsymbol{\omega}_1, \ldots, \boldsymbol{\omega}_m$, and let the resulting policy set be represented as $\{\pi_p\}_{p=1,\ldots,m}$, where $\pi_p = \pi(a \mid s, \boldsymbol{\omega} = \boldsymbol{\omega}_p)$, and $\mathbf{G}^{\pi_p}$ is the corresponding unweighted expected return. We say the solution $\mathbf{G}^{\pi_p}$ is dominated by $\mathbf{G}^{\pi_q}$ when there is no objective for which $\pi_q$ is worse than $\pi_p$, i.e., $G^{\pi_p}_i < G^{\pi_q}_i$ for all $i \in \{1, 2, \ldots, n\}$. If a solution is not dominated, it is part of the Pareto set, denoted as $P$. The curve traced by the solutions in a Pareto set is also known as the Pareto front. In MORL, our goal is to learn a policy such that its empirical Pareto set is a good approximation of the true Pareto front. While we do not know the true Pareto front for many problems, we can define metrics for relative comparisons between different algorithms. Specifically, we evaluate a Pareto set $P$ based on two metrics, hypervolume and sparsity, that we describe next.
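To make the dominance check concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper) that filters a set of empirical vector returns down to its Pareto set using the strict criterion above:

```python
import numpy as np

def is_dominated(g_p: np.ndarray, g_q: np.ndarray) -> bool:
    """True if g_p is dominated by g_q, i.e. g_q is strictly better
    on every objective (the criterion stated above)."""
    return bool(np.all(g_q > g_p))

def pareto_set(returns: np.ndarray) -> np.ndarray:
    """Keep the non-dominated rows of an (m, n) array of vector returns."""
    keep = [
        i for i, g in enumerate(returns)
        if not any(is_dominated(g, other)
                   for j, other in enumerate(returns) if j != i)
    ]
    return returns[keep]

# Example: three policies evaluated on two objectives.
G = np.array([[3.0, 1.0], [2.0, 2.0], [1.0, 0.5]])
print(pareto_set(G))  # [[3., 1.], [2., 2.]] -- the third point is dominated
```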
Definition 1 (Hypervolume). Hypervolume $H(P)$ measures the space or volume enclosed by the solutions in the Pareto set $P$:

$$H(P) = \int_{\mathbb{R}^m} \mathbb{1}_{H(P)}(z)\, dz,$$

where $H(P) = \{z \in Z \mid \exists i, 1 \le i \le |P| : r \preceq z \preceq P(i)\}$. Here, $P(i)$ is the $i$th solution in $P$, $\preceq$ is the dominance relation operator, $r$ is a reference point, and $\mathbb{1}_{H(P)}(z)$ equals 1 if $z \in H(P)$ and 0 otherwise. Higher hypervolumes are better.

Definition 2 (Sparsity). Sparsity $S(P)$ measures the density of the Pareto front covered by a Pareto set $P$:

$$S(P) = \frac{1}{|P| - 1} \sum_{i=1}^{n} \sum_{k=1}^{|P| - 1} \left( \tilde{P}_i(k) - \tilde{P}_i(k+1) \right)^2,$$

where $\tilde{P}_i$ represents a list sorted as per the values of the $i$th objective in $P$ and $\tilde{P}_i(k)$ is the $k$th value in the sorted list. Lower sparsity is better.
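As a concrete reference, the sketch below computes both metrics for the two-objective case; it assumes the input points are mutually non-dominated and that the reference point $r$ is dominated by all of them (the exact reference point is a benchmark detail, so it is left as a parameter here):

```python
import numpy as np

def hypervolume_2d(front: np.ndarray, ref: np.ndarray) -> float:
    """Hypervolume of a 2-objective Pareto front w.r.t. a reference point
    dominated by every solution (maximization convention)."""
    # Sort by the first objective, descending; along a Pareto front the
    # second objective then increases, so we can sweep rectangles.
    pts = front[np.argsort(-front[:, 0])]
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        hv += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return hv

def sparsity(front: np.ndarray) -> float:
    """Sparsity per Definition 2: squared gaps between neighboring values,
    summed over per-objective sorted lists, averaged over |P| - 1."""
    if len(front) < 2:
        return 0.0
    total = sum(float(np.sum(np.diff(np.sort(front[:, i])) ** 2))
                for i in range(front.shape[1]))
    return total / (len(front) - 1)

front = np.array([[3.0, 1.0], [2.0, 2.0]])
print(hypervolume_2d(front, ref=np.array([0.0, 0.0])))  # 5.0
print(sparsity(front))  # ((3-2)^2 + (2-1)^2) / 1 = 2.0
```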
See Figure 1 for an illustration and Appendix F for discussion on other possible metrics.
4 D4MORL: DATASETS FOR OFFLINE MULTI-OBJECTIVE REINFORCEMENT LEARNING
In offline RL, the goal of an RL agent is to learn the optimal policy using a fixed dataset without any interactions with the environment (Levine et al., 2020). This perspective brings RL closer to supervised learning, where the presence of large-scale datasets has been foundational for further progress in the field. Many such data benchmarks exist for offline RL as well; a notable one is the D4RL (Fu et al., 2020) benchmark for continuous control which has led to the development of several state-of-the-art offline RL algorithms (Kostrikov et al., 2021; Kumar et al., 2020; Chen et al., 2021) that can scale favorably even in high dimensions. To the best of our knowledge, there are no such existing benchmarks for offline MORL. Even for the online setting, most works in MORL conduct evaluations on toy MDPs (e.g., gridworld) with a few exceptions that include continuous control, e.g., Chen et al. (2019); Xu et al. (2020). This calls for a much-needed push towards more challenging benchmarks for reliable evaluation of MORL, especially in the offline setting.
We introduce Datasets for Multi-Objective Reinforcement Learning (D4MORL), a large-scale benchmark for offline MORL. Our benchmark consists of offline trajectories from 6 multi-objective MuJoCo environments, including 5 environments with 2 objectives each (MO-Ant, MO-HalfCheetah, MO-Hopper, MO-Swimmer, MO-Walker2d), and one environment with three objectives (MO-Hopper-3obj). The objectives are conflicting for each environment; for instance, the two objectives in MO-Hopper correspond to jumping and running; in MO-HalfCheetah, MO-Swimmer, and MO-Walker2d, they correspond to the speed and energy savings of the agent. See Appendix A for more details on the semantics of the target objectives for each environment. These environments were first introduced in Xu et al. (2020) for online MORL, and as such, we use their pretrained ensemble policies as building blocks for defining new behavioral policies for dataset collection, which we discuss next.
4.1 TRAJECTORY SAMPLING
The quality of the behavioral policy used for sampling trajectories in the offline dataset is a key factor for benchmarking downstream offline RL algorithms. In existing benchmarks for single-objective RL such as D4RL (Fu et al., 2020), the quality of a behavioral policy can be ascertained and varied based on its closeness to a single expert policy, as measured by its scalar-valued returns. For a MOMDP, we do not have the notion of a scalar return and hence, a reference expert policy (or set of policies) should reflect the optimal returns for all possible preferences in the preference space.
We use Prediction-Guided Multi-Objective Reinforcement Learning (PGMORL), a state-of-the-art MORL algorithm, for defining reference expert policies. PGMORL (Xu et al., 2020) uses evolutionary algorithms to train an ensemble of policies to approximate the Pareto set. Each reference policy in the ensemble is associated with a unique preference; any new preference is mapped to the closest preference in the reference set. The number of policies in the ensemble can vary significantly; for instance, we have roughly 70 reference policies for MO-Ant and 2445 policies for harder environments such as MO-Hopper-3obj. Given a desired preference, we define two sets of behavioral policies:
1. Expert Dataset: We find the best reference policy in the policy ensemble, and always follow the action taken by the selected reference policy.
2. Amateur Dataset: As before, we first find the best reference policy in the policy ensemble. With a fixed probability $p$, we randomly perturb the actions of the reference policies. Otherwise, with probability $1 - p$, we take the same action as the reference policy. In D4MORL, we set $p = 0.65$.
Further details are described in Appendix C. In Figure 2, we show the returns of the trajectories rolled out from the expert and amateur policies for the 2 objective environments evaluated for a uniform sampling of preferences. We can see that the expert trajectories typically dominate the amateur trajectories, as desired. For the amateur trajectories, we see more diversity in the empirical returns for both objectives under consideration. The return patterns for the amateur trajectories vary across different environments providing a diverse suite of datasets in our benchmark.
4.2 PREFERENCE SAMPLING
The coverage of any offline dataset is an important factor in dictating the performance of downstream offline RL algorithms (Levine et al., 2020). For MORL, the coverage depends on both the behavioral MORL policy as well as the distribution of preferences over which this policy is evaluated. We use the following protocols for sampling from the preference space Ω. First, we restrict our samples to lie within a physically plausible preference space $\Omega^* \subseteq \Omega$ covered by the behavioral policy $\pi_\beta$. For instance, MO-Hopper has two objectives: jumping and running. Since the agent can never gain running rewards without leaving the floor, a preference of 100% running and 0% jumping is not achievable and is excluded from our preference sampling distribution.
Second, we are primarily interested in offline trajectories that emphasize competition between multiple objectives rather than focusing on a singular objective. To enforce this criterion, we define 3 sampling distributions concentrated around the centroid of the preference simplex. The largest spread distribution samples uniformly from Ω˚ and is denoted as High-Entropy (High-H). Next, we have a Medium-Entropy (Med-H) distribution specified via samples of Dirichlet distributions with large values of their concentration hyperparameters (aka α). Finally, we have a Low-Entropy (Low-H) distribution that is again specified via samples of Dirichlet distributions but with low values of their concentration hyperparameters. We illustrate the samples for each of the preference distributions along with their empirical entropies in Figure 3. Further details on the sampling distributions are deferred to Appendix B. By ensuring different levels of coverage, we can test the generalizability of an MORL policy to preferences unseen during training. In general, we expect Low-H to be the hardest of the three distributions due to its restricted coverage, followed by Med-H and High-H.
Overall Data Generation Pipeline. The pseudocode for generating the dataset is described in Algorithm 1. Given a preference distribution, we first sample a preference ω and query the closest behavioral policy in either the amateur/expert ensemble matching ω. We roll out this policy for T time steps (or until the end of an episode if sooner) and record the state, action, and reward information. Each trajectory in our dataset is represented as:
$$\tau = \langle \boldsymbol{\omega}, s_1, a_1, \mathbf{r}_1, \ldots, s_T, a_T, \mathbf{r}_T \rangle$$
Algorithm 1 Data Collection in D4MORL

procedure COLLECT(prefDist, nTraj, env, pretrainedAgents, T)
    agents = pretrainedAgents
    prefs = prefDist(nTraj)
    all_trajs = []
    for ω in prefs do
        agent = closestAgent(agents, ω)
        s = env.reset()
        done = False
        τ = [ω]
        t = 0
        while (NOT done) AND (t < T) do
            a = agent.get_action(s)
            s', done, r = env.step(a)
            append (s, a, s', r) to τ
            s = s'
            t = t + 1
        append τ to all_trajs
    return all_trajs
For every environment in D4MORL, we collect 50K trajectories of length $T = 500$ for both expert and amateur trajectory distributions under each of the 3 preference distributions. Overall, this results in a total of 1.8M trajectories over all 6 environments, which corresponds to roughly 867M time steps. We refer the reader to Table 5 in Appendix B for additional statistics on the dataset.
5 PARETO-EFFICIENT DECISION AGENTS (PEDA)
In this section, we propose Pareto-Efficient Decision Agents (PEDA), a new family of offline multi-objective RL agents. PEDA aims to achieve Pareto-efficiency by extending Decision Transformers (Chen et al., 2021) to the multi-objective setting. We first introduce the architecture of Decision Transformers (DT) and their variant, Reinforcement Learning Via Supervised Learning (RvS), followed by our modifications extending them to the multi-objective setting.
DT casts offline RL as a conditional sequence modeling problem that predicts the next action by conditioning a transformer on past states, actions, and desired returns. The desired returns are defined as returns-to-go (RTG), $g_t = \sum_{t'=t}^{T} r_{t'}$, the future returns that this action is intended to achieve. Therefore, the trajectory is represented by $\tau = \langle s_1, a_1, g_1, \ldots, s_T, a_T, g_T \rangle$. In practice, we use a causally masked transformer architecture such as GPT (Radford et al., 2019) to process this sequence and predict the actions by observing the past $K$ timesteps, consisting of $3K$ tokens. DT and its variants have been shown to be more stable and robust to optimize due to the simplicity of the loss function, and easier to scale to more complex settings such as environments with high-dimensional actions or states and agents with broad capabilities in multitask settings (Lee et al., 2022). Hence, we adopt Decision Transformers (Chen et al., 2021) as the representative base algorithm on which we build our work.
In follow-up work, Emmons et al. (2021) extend DT and show that even multi-layer perceptrons conditioned on the average returns-to-go can achieve similar performance without the use of transformers. They call their model Reinforcement Learning Via Supervised Learning (RvS). However, RvS is generally not very stable when conditioned on very large returns, unlike DT.
5.1 MULTI-OBJECTIVE REINFORCEMENT LEARNING VIA SUPERVISED LEARNING
In PEDA, our goal is to train a single preference-conditioned agent for offline MORL. By including preference conditioning, we enable the policy to be trained on arbitrary offline data, including trajectories collected from behavioral policies that are associated with alternate preferences. To parameterize our policy agents, we extend the DT and RvS architectures to include preference tokens and vector-valued returns. We refer to such preference-conditioned extensions of these architectures as MODT(P) and MORVS(P) respectively, which we describe next.
Preference Conditioning. Naively, we can incorporate the preference $\boldsymbol{\omega}$ into DT by adding it as a token at each timestep and feeding it through a separate embedding layer. However, we empirically find that such a model design tends to ignore $\boldsymbol{\omega}$, and the correlation between the preferences and predicted actions is weak. Therefore, we propose to concatenate $\boldsymbol{\omega}$ to the other tokens before any layers in MODT(P). Concretely, we define $s^* = s \oplus \boldsymbol{\omega}$, $a^* = a \oplus \boldsymbol{\omega}$, and $g^* = g \oplus \boldsymbol{\omega}$, where $\oplus$ denotes the concatenation operator. Triples of $s^*$, $a^*$, $g^*$ then form the new trajectory. As for MORVS(P), we concatenate the preference with the states and the average RTGs by default, and the network interprets everything as one single input.
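A short PyTorch sketch of this concatenation is shown below; the tensor shapes (batch size, context length, state/action dimensions) are illustrative assumptions, not values from the paper:

```python
import torch

def concat_preference(s, a, g, w):
    """Form the preference-augmented tokens s* = s ⊕ ω, a* = a ⊕ ω,
    g* = g ⊕ ω. s, a, g: (B, T, d); w: (B, n_obj)."""
    w_t = w.unsqueeze(1).expand(-1, s.shape[1], -1)  # repeat ω at every timestep
    return (torch.cat([s, w_t], dim=-1),
            torch.cat([a, w_t], dim=-1),
            torch.cat([g, w_t], dim=-1))

# Hypothetical shapes: batch of 4 trajectories, context K = 20, 2 objectives.
s = torch.randn(4, 20, 17); a = torch.randn(4, 20, 6)
g = torch.randn(4, 20, 2); w = torch.rand(4, 2)
s_star, a_star, g_star = concat_preference(s, a, g, w)
print(s_star.shape)  # torch.Size([4, 20, 19])
```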
Multi-Objective Returns-to-Go. Similar to RTG for the single-objective case, we can define the vector-valued RTG as $\mathbf{g}_t = \sum_{t'=t}^{T} \mathbf{r}_{t'}$. Given a preference vector $\boldsymbol{\omega}$, we can scalarize the total returns-to-go as $\hat{g}_t = \boldsymbol{\omega}^\top \mathbf{g}_t$. In principle, the scalarized RTG $\hat{g}_t$ can be recovered given the preference vector $\boldsymbol{\omega}$ and the vector-valued RTG $\mathbf{g}_t$. However, we empirically find that directly feeding MODT/MORVS with the preference-weighted RTG vector $\mathbf{g}_t \odot \boldsymbol{\omega}$ is slightly preferable for stable training, where $\odot$ denotes the elementwise product operator. Another unique challenge in the MORL setting concerns the scale of different objectives. Since different objectives can signify different physical quantities (e.g., energy and speed), the choice of scaling can influence policy optimization. We adopt a simple normalization scheme, where the returns for each objective are normalized by subtracting the minimum observed value for that objective and dividing by the range of values (max minus min). Note that the maximum and minimum are computed based on the offline dataset and hence are not necessarily the true min/max objective values. After this normalization, the values for every objective in the trajectory are on the same scale, between 0 and 1. For evaluating the hypervolume and sparsity, we use the unnormalized values so that we can make comparisons across different datasets that may have different min/max boundaries.
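The sketch below illustrates one plausible implementation of this conditioning target: compute per-timestep vector returns-to-go, min-max normalize each objective with dataset-level statistics, and take the elementwise product with $\boldsymbol{\omega}$ (function and variable names are ours):

```python
import numpy as np

def weighted_rtg(rewards: np.ndarray, w: np.ndarray,
                 obj_min: np.ndarray, obj_max: np.ndarray) -> np.ndarray:
    """Per-timestep preference-weighted returns-to-go g_t ⊙ ω.
    rewards: (T, n) per-step reward vectors; w, obj_min, obj_max: (n,)."""
    rtg = np.cumsum(rewards[::-1], axis=0)[::-1]  # suffix sums: g_t = sum_{t'>=t} r_t'
    rtg = (rtg - obj_min) / (obj_max - obj_min)   # per-objective min-max to [0, 1]
    return rtg * w                                # elementwise product with ω

r = np.array([[1.0, 0.2], [0.5, 0.4], [2.0, 0.1]])
print(weighted_rtg(r, w=np.array([0.7, 0.3]),
                   obj_min=np.zeros(2), obj_max=np.array([5.0, 1.0])))
```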
Training. We follow a simple supervised training procedure, where we train the policies on randomly sampled mini-batches with an MSE loss (for continuous actions). In MODT and MODT(P), the input states, actions, and returns-to-go (with concatenated preferences) are treated as tokens and embedded through one layer of MLP. We apply a layer of MLP and Tanh on the last hidden state of the GPT-2 transformer to predict the next action. In MORVS and MORVS(P), we use only information from the current timestep and MLP layers to predict the next action.
6 EXPERIMENTS
In this section, we evaluate the performance of PEDA on the D4MORL benchmark. First, we investigate the benefits of preference conditioning by comparing against naive variants of Decision Transformer (MODT) and RvS (MORVS), where no preference information is available and we scalarize the multi-objective vector returns into weighted sums. We denote our methods with preference conditioning as MODT(P) and MORVS(P). Second, we compare our methods with classic imitation learning and temporal difference learning algorithms with preference conditioning.
Imitation learning. Imitation learning simply uses a supervised loss to train a mapping from states (with or without concatenated preferences) to actions. We use behavioral cloning (BC) here and train multi-layer MLPs as models, named BC (without preference) and BC(P) (with preference).
Temporal difference learning. Conservative Q-Learning (CQL) (Kumar et al., 2020) is a state-of-the-art standard offline RL method, which learns a conservative Q-function $f: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ through neural networks. We modify the network architecture such that it also takes preference vectors as inputs, to learn a preference-conditioned Q-function $f^*: \mathcal{S} \times \mathcal{A} \times \Omega \to \mathbb{R}$. We denote this method as CQL(P).
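A minimal sketch of such a preference-conditioned critic is given below; hidden sizes and depth are assumptions for illustration, not the configuration used in our experiments:

```python
import torch
import torch.nn as nn

class PreferenceConditionedQ(nn.Module):
    """Sketch of a critic f*: S x A x Ω -> R, as in CQL(P): the preference
    vector is simply appended to the usual (state, action) input."""
    def __init__(self, state_dim: int, action_dim: int, n_obj: int,
                 hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + n_obj, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a, w):
        return self.net(torch.cat([s, a, w], dim=-1))

q = PreferenceConditionedQ(state_dim=17, action_dim=6, n_obj=2)
print(q(torch.randn(4, 17), torch.randn(4, 6), torch.rand(4, 2)).shape)  # (4, 1)
```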
6.1 MULTI-OBJECTIVE OFFLINE BENCHMARK
Hypervolume. We compare the hypervolume of our methods with all baselines on the expert datasets in Table 1 as well as the amateur datasets in Table 2. For the two-objective environments, we evaluate the models on 501 equally spaced preference points in the range [0, 1]; on the three-objective environment MO-Hopper-3obj, models are evaluated on 325 equally spaced points. Each point is evaluated 5 times with random environment re-initialization, and the median value is recorded. Finally, all the results are based on 3 random seeds, and we report the mean performance along with the standard error. In Table 1 and Table 2, we can see that MODT(P) and MORVS(P) outperform the other baselines with relatively low standard errors. Moreover, the PEDA variants MODT(P) and MORVS(P) approach the behavioral policy upper bound.
Sparsity. We also evaluate sparsity. Since sparsity comparisons are only meaningful between models that are sensitive to the preference and have relatively similar hypervolume performance, we only show results for models that concatenate the preference. Overall, MORVS(P) has the lowest sparsity in most environments, while at the same time achieving an outstanding hypervolume.
6.2 ABLATION STUDY
Pareto front approximation. We ablate how well MODT(P) and MORVS(P) can approximate the Pareto front by conditioning on different preference points. We show the results in Figure 4, where we can see that the models approximate the Pareto front well, with some dominated points (colored in pink) mostly in the MO-Hopper and MO-Walker2d environments. The results are based on the average of 3 seeds, and the full plot can be found in Appendix G.
Return distribution. We ablate how well MODT(P) and MORVS(P) follow their given target return, based on a normalized and weighted value. We present the results in Figure 5 for MORVS(P) under the High-H-Expert datasets and refer to Appendix H for the full settings. Here, we see that the models follow the oracle line closely when conditioned on targets within the dataset distribution, and generalize to targets outside of the dataset distribution as well.
7 CONCLUSION
We proposed a new problem setup for offline Multi-Objective Reinforcement Learning to scale Pareto-efficient decision-making using offline datasets. To characterize progress, we introduced D4MORL, a dataset benchmark consisting of offline datasets generated from behavioral policies of different fidelities (expert/amateur) and rolled out under preference distributions with varying entropies (high/medium/low). Then, we proposed PEDA, a family of offline MORL policy optimization algorithms based on decision transformers. To our knowledge, the PEDA variants are the first offline MORL policies that support continuous action and preference spaces. We showed that by concatenating and embedding preferences together with other inputs, our policies can effectively approximate the Pareto front of the underlying behavioral policy, as measured by the hypervolume and sparsity metrics. Our proposed family includes MLP and transformer-based variants, viz. MORVS(P) and MODT(P), with MORVS(P) performing the best overall. In some scenarios, the learned policies can also generalize to higher target rewards that exceed the data distribution.
REPRODUCIBILITY STATEMENT
Our code is available at: https://github.com/baitingzbt/PEDA.
ACKNOWLEDGEMENTS
AG’s research is supported by a Meta Research Award and a Cisco grant.
A ENVIRONMENT DESCRIPTION
All environments are the same as in Xu et al. (2020), except that when resetting the environment, each parameter is uniformly sampled from $[x - 10^{-3}, x + 10^{-3}]$, with $x$ being the default value. The only exception is that we always reset the height to 1.25 for MO-Hopper and MO-Hopper-3obj, since this parameter directly relates to the reward function. All environments have a max episode length of 500 steps per trajectory, but the agent may also die before reaching the maximum length.
A.1 MO-ANT
The two objectives in MO-Ant are the distances achieved along the x and y axes respectively, denoted as $\mathbf{r} = [r^{v_x}_t, r^{v_y}_t]^\top$. Consider that the position of the agent is represented as $(x_t, y_t)$ at time $t$ when it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, $dt = 0.05$, and an action cost of $r_a = \frac{1}{2} \sum_k a_k^2$. The rewards are calculated as:

$$r^{v_x}_t = (x_t - x_{t-1}) / dt + r_s - r_a$$
$$r^{v_y}_t = (y_t - y_{t-1}) / dt + r_s - r_a \tag{1}$$
A.2 MO-HALFCHEETAH
The two objectives in MO-HalfCheetah are running speed and energy saving, denoted as $\mathbf{r} = [r^v_t, r^e_t]^\top$. Consider that the position of the agent is represented as $(x_t, y_t)$ at time $t$ when it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, fixed $dt = 0.05$, and an action cost of $r_a = \sum_k a_k^2$. The rewards are calculated as:

$$r^v_t = \min\{4.0, (x_t - x_{t-1}) / dt\} + r_s$$
$$r^e_t = 4.0 - r_a + r_s \tag{2}$$
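For illustration, Equation (2) translates directly into code; the helper below is our own sketch, not part of the environment implementation:

```python
import numpy as np

def mo_halfcheetah_reward(x_t, x_prev, action, dt=0.05, r_s=1.0):
    """Vector reward [r_v, r_e] following Equation (2)."""
    r_a = float(np.sum(np.square(action)))      # action cost
    r_v = min(4.0, (x_t - x_prev) / dt) + r_s   # capped running speed
    r_e = 4.0 - r_a + r_s                       # energy saving
    return np.array([r_v, r_e])
```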
A.3 MO-HOPPER
The two objectives in MO-Hopper are running and jumping, denoted as $\mathbf{r} = [r^r_t, r^j_t]^\top$. Consider that the position of the agent is represented as $(x_t, h_t)$ at time $t$ when it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, a fixed initial height $h_{init} = 1.25$, a fixed $dt = 0.01$, and an action cost of $r_a = 2 \times 10^{-4} \sum_k a_k^2$. The rewards are calculated as:

$$r^r_t = 1.5 \times (x_t - x_{t-1}) / dt + r_s - r_a$$
$$r^j_t = 12 \times (h_t - h_{init}) / dt + r_s - r_a \tag{3}$$
A.4 MO-HOPPER-3OBJ
The physical dynamics are the same in MO-Hopper and MO-Hopper-3obj, but this environment has 3 objectives: running, jumping, and energy saving. The rewards are denoted as $\mathbf{r} = [r^r_t, r^j_t, r^e_t]^\top$. Consider that the position of the agent is represented as $(x_t, h_t)$ at time $t$ when it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, a fixed initial height $h_{init} = 1.25$, a fixed $dt = 0.01$, and an action cost of $r_a = \sum_k a_k^2$. The rewards are calculated as:

$$r^r_t = 1.5 \times (x_t - x_{t-1}) / dt + r_s$$
$$r^j_t = 12 \times (h_t - h_{init}) / dt + r_s$$
$$r^e_t = 4.0 - r_a + r_s \tag{4}$$
A.5 MO-SWIMMER
The two objectives in MO-Swimmer are speed and energy saving, denoted as $\mathbf{r} = [r^v_t, r^e_t]^\top$. Consider that the position of the agent is represented as $(x_t, y_t)$ at time $t$ when it takes the action $a_t$. The agent has a fixed $dt = 0.05$ and an action cost of $r_a = \sum_k a_k^2$. The rewards are calculated as:

$$r^v_t = (x_t - x_{t-1}) / dt$$
$$r^e_t = 0.3 - 0.15 \times r_a \tag{5}$$
A.6 MO-WALKER2D
The objectives in MO-Walker2d are speed and energy saving, denoted as $\mathbf{r} = [r^v_t, r^e_t]^\top$. Consider that the position of the agent is represented as $(x_t, y_t)$ at time $t$ when it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, a fixed $dt = 0.008$, and an action cost of $r_a = \sum_k a_k^2$. The rewards are calculated as:

$$r^v_t = (x_t - x_{t-1}) / dt + r_s$$
$$r^e_t = 4.0 - r_a + r_s \tag{6}$$
B DATASET DETAILS
To uniformly sample the High-H data from the entire preference space, we can equivalently sample from an $n$-dimensional simplex, where $n$ is the number of objectives. The resulting sampling scheme is:

$$\omega_{high} \sim \| f_{\exp}(\,\cdot\,, \lambda = 1) \|_1 \tag{9}$$

Normalizing the exponential samples by their 1-norm ensures the entries of each preference vector add up to 1. When $\Omega^* \neq \Omega$, we perform rejection sampling to restrict the range.
To sample the Med-H and Low-H data, we first sample $\alpha$ from a non-negative uniform distribution, then sample the corresponding Dirichlet preference. Here, we sample a different $\alpha$ each time to make sure the mode of the Dirichlet changes and thus allow more variation.
$$\omega_{med} \sim f_{Dirichlet}(\alpha), \quad \text{where } \alpha \sim \text{Unif}(0, 10^6)$$
$$\omega_{low} \sim f_{Dirichlet}(\alpha), \quad \text{where } \alpha \sim \text{Unif}(\tfrac{1}{3} \times 10^6, \tfrac{2}{3} \times 10^6) \tag{10}$$
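A NumPy sketch of these samplers is given below; the scalar-$\alpha$ reading of Equation (10) and the small guard against a degenerate draw are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_high_h(n_obj):
    """High-H (Eq. 9): exponential samples normalized by their 1-norm,
    i.e. uniform on the simplex. Rejection sampling for Ω* would wrap this."""
    x = rng.exponential(scale=1.0, size=n_obj)
    return x / x.sum()

def sample_dirichlet_pref(alpha_low, alpha_high, n_obj):
    """Med-H / Low-H (Eq. 10): draw a concentration α, then a Dirichlet sample."""
    alpha = max(rng.uniform(alpha_low, alpha_high), 1e-3)  # guard against α = 0
    return rng.dirichlet(np.full(n_obj, alpha))

omega_high = sample_high_h(2)
omega_med = sample_dirichlet_pref(0.0, 1e6, 2)
omega_low = sample_dirichlet_pref(1e6 / 3, 2e6 / 3, 2)
```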
Since our behavioral policy consists of a group of single-objective policies $\pi_\beta = \{\pi_1, \ldots, \pi_B\}$, with $B$ being the total number of candidate policies, we first find the expected unweighted raw returns $\mathbf{G}^{\pi_1}, \ldots, \mathbf{G}^{\pi_B}$. Then, we find the estimated preferences $\hat{\boldsymbol{\omega}}^{\pi_1}, \ldots, \hat{\boldsymbol{\omega}}^{\pi_B}$ by letting $\hat{\omega}^{\pi_b}_i = G^{\pi_b}_i / \sum_{j=1}^{n} G^{\pi_b}_j$, the estimated preference on the $i$th objective of the $b$th candidate policy. For each sampled preference $\boldsymbol{\omega} \sim \Omega^*$ following (9) or (10), we sample a complete trajectory using the single-objective behavioral policy with the smallest Euclidean distance $d(\boldsymbol{\omega}, \hat{\boldsymbol{\omega}}^{\pi_b})$. Empirically, this means picking the candidate policy whose expected reward ratio is closest to $\boldsymbol{\omega}$, as in the sketch below.
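Concretely (a sketch with our own function names):

```python
import numpy as np

def closest_agent(G: np.ndarray, omega: np.ndarray) -> int:
    """G: (B, n) expected unweighted returns of the B candidate policies.
    Returns the index of the policy whose reward ratio is closest to ω."""
    omega_hat = G / G.sum(axis=1, keepdims=True)  # estimated preference per policy
    return int(np.argmin(np.linalg.norm(omega_hat - omega, axis=1)))

G = np.array([[90.0, 10.0], [55.0, 45.0]])
print(closest_agent(G, np.array([0.5, 0.5])))  # 1: ratio (0.55, 0.45) is closest
```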
C EXPERT & AMATEUR DATASETS
In the Expert collection, we sample trajectories using the fully-trained behavioral policy $\pi_\beta$. In this paper, we use PGMORL (Xu et al., 2020) as our behavioral policy $\pi_\beta$:

$$a^{expert}_{t+1} = \pi_\beta(a \mid s = s_t, \boldsymbol{\omega} = \boldsymbol{\omega}_t) \tag{11}$$
In the Amateur collection, actions have a 65% chance ($p = 0.65$; cf. Section 4.1) of being stochastically scaled from the expert action, and otherwise match the expert action:

$$a^{amateur}_{t+1} = \begin{cases} a^{expert}_{t+1} & \text{with probability } 35\% \\ a^{expert}_{t+1} \times \text{Unif}(0.35, 1.65) & \text{with probability } 65\% \end{cases} \tag{12}$$
In the MO-Swimmer environment only, the 35% of actions that would otherwise match the expert are instead drawn uniformly at random from the entire action space, to increase variance and achieve a performance level similar to the other amateur datasets. The resulting strategy for MO-Swimmer is:

$$a^{amateur}_{t+1} = \begin{cases} \text{Unif}(\mathcal{A}) & \text{with probability } 35\% \\ a^{expert}_{t+1} \times \text{Unif}(0.35, 1.65) & \text{with probability } 65\% \end{cases} \tag{13}$$
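Both cases reduce to a small sampling rule per step; the sketch below (signature and names are ours) mirrors Equations (12) and (13):

```python
import numpy as np

rng = np.random.default_rng(0)

def amateur_action(expert_a: np.ndarray, action_low, action_high,
                   swimmer: bool = False) -> np.ndarray:
    """Amateur rollout step: Eq. (12) by default, Eq. (13) for MO-Swimmer."""
    if rng.random() < 0.35:
        if swimmer:
            return rng.uniform(action_low, action_high)  # Unif(A)
        return expert_a                                  # keep the expert action
    return expert_a * rng.uniform(0.35, 1.65)            # random rescale
```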
D FINDING APPROPRIATE MULTI-OBJECTIVE RTG
In Decision Transformer (Chen et al., 2021) and RvS (Emmons et al., 2021), RTG denotes the desired future reward. In MORL, however, designing an appropriate multi-objective RTG is necessary. On top of discounting each objective's desired reward separately, we empirically find that since some objectives are inherently conflicting, setting the RTG high for one objective means we should accordingly lower the RTG for the other objectives (i.e., we shouldn't use the maximum RTG for all of them). In this way, our test-time RTG can stay closer to the training distribution.
In this paper, we use linear regression $\mathbf{G} = f(\boldsymbol{\omega})$ to find the corresponding RTG conditioned on the given preference. Figure 6 demonstrates the weighted RTG of the “running” objective as a function of its preference in MO-Hopper, where the conflicting objectives are “running” and “jumping”. It is clear that the RTG closely correlates with the conditioned preference for running, and we should adjust the initial test-time RTG accordingly.
Finally, we only use the linear regression model learned from the Expert dataset. This is because regression models fitted on sub-optimal data can easily produce an RTG lower than optimal. In practice, we can achieve a similar result by training the regression model only on the best-performing trajectories for the respective preferences. Other regression or clustering methods to find an appropriate RTG could also work, and we leave them as future work, especially when not assuming linearly weighted objectives.
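As a sketch of this step, one can fit a multi-output linear regression from preferences to (normalized, weighted) returns on the expert data and query it at test time; the arrays below are placeholders, not D4MORL statistics:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder training data standing in for expert trajectories: each row
# pairs a trajectory's preference with its achieved vector return.
prefs = np.random.rand(1000, 2)
prefs /= prefs.sum(axis=1, keepdims=True)
returns = np.random.rand(1000, 2)  # replace with real expert returns

rtg_model = LinearRegression().fit(prefs, returns)          # G = f(ω)
initial_rtg = rtg_model.predict(np.array([[0.7, 0.3]]))[0]  # test-time target for ω
```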
E TRAINING DETAILS
In this section, we list our hyperparameters and model details. Specifically, we use the same hyperparameters for all algorithms, except for the learning rate scheduler and warm-up steps. In the MODT family, inputs are first embedded by a 1-layer fully-connected network, and n_layer represents the number of transformer blocks; in the BC family, n_layer represents the number of MLP layers used to embed each input; in MORVS and MORVS(P), we leverage the same embedding strategy as Emmons et al. (2021). Additionally, MORVS and MORVS(P) both have a context length of 1, because they only use the current state to predict the next action, whereas MODT and BC use the past 20 timesteps.
E.1 PARAMETERS
| Hyperparameter     | MODT   | MORvS | BC     |
|--------------------|--------|-------|--------|
| Context Length K   | 20     | 1     | 20     |
| Batch Size         | 64     | 64    | 64     |
| Hidden Size        | 512    | 512   | 512    |
| Learning Rate      | 1e-4   | 1e-4  | 1e-4   |
| Weight Decay       | 1e-3   | 1e-3  | 1e-3   |
| Dropout            | 0.1    | 0.1   | 0.1    |
| n_layer            | 3      | 3     | 3      |
| Optimizer          | AdamW  | AdamW | AdamW  |
| Loss Function      | MSE    | MSE   | MSE    |
| LR Scheduler       | lambda | None  | lambda |
| Warm-up Steps      | 10000  | N/A   | 4000   |
| Activation         | ReLU   | ReLU  | ReLU   |
E.2 TRAINING STEPS
| Dataset Name   | MODT Steps | RvS/BC Steps |
|----------------|------------|--------------|
| MO-Ant         | 20K        | 200K         |
| MO-HalfCheetah | 80K        | 200K         |
| MO-Hopper      | 400K       | 200K         |
| MO-Hopper-3obj | 400K       | 200K         |
| MO-Swimmer     | 260K       | 200K         |
| MO-Walker2d    | 360K       | 200K         |
E.3 OTHER ATTEMPTED ARCHITECTURES FOR MODT AND MODT(P)
We tried the following MODT architectures in our preliminary experiments. We eventually picked Case 4, as it gave the best performance in our experiments.
1. Consider $\boldsymbol{\omega}$ as an independent token of the transformer.
2. Train a separate embedding for $\boldsymbol{\omega}$; concatenate the embeddings to get $f_{\phi_s}(s) \oplus f_{\phi_\omega}(\boldsymbol{\omega})$, $f_{\phi_a}(a) \oplus f_{\phi_\omega}(\boldsymbol{\omega})$, and $f_{\phi_g}(g) \oplus f_{\phi_\omega}(\boldsymbol{\omega})$; then pass these into the transformer.
3. Add another MLP layer on top of Case 2 after concatenation, then pass the output into the transformer.
4. Concatenate $\boldsymbol{\omega}$ to the other tokens before any layers. This means we have $s^* = s \oplus \boldsymbol{\omega}$, $a^* = a \oplus \boldsymbol{\omega}$, and $g^* = g \oplus \boldsymbol{\omega}$.
F OTHER EVALUATION METRICS
Among the variety of metrics for MORL, we use Hypervolume (HV) and Sparsity (SP) to benchmark models for several reasons. First, many metrics such as the $\epsilon$-metric require prior knowledge of the true Pareto front, which is not available for our MuJoCo environments. Second, we only assume a linear reward function and cannot collect real-time user feedback; thus, utility-based metrics such as the expected utility metric (EUM) are not applicable. Finally, using the same metrics as the original behavioral policy paper facilitates algorithm comparisons.
G PARETO SET VISUALIZATIONS
We present the Pareto set visualizations for all of our models trained on each High-H dataset in Figure 7. Each point in each subplot is based on the average result of 3 seeds. In the 2-objective environments, we evaluate the model using 501 equally spaced preference points. In the 3-objective environment, we use 351 equally spaced preference points instead. Since the environments are stochastically initialized, we evaluate 5 times at each preference point and take the mean value. This makes each point the average value of 15 runs. Here, we allow a small tolerance when coloring the dominated points.
If a preference point is within the achievable preference space $\Omega^*$ but the solution is dominated, we color it in red. Since our models are conditioned on continuous preference points and the environments are initialized stochastically, we allow a small tolerance (3%-8%) for points to be colored in blue. The hypervolume and sparsity metrics, on the other hand, are based on strictly undominated solutions without tolerance.
H MEDIUM & LOW ENTROPY DATASET TRAINING
We train on the Medium-Entropy and Low-Entropy datasets for the MO-HalfCheetah environment. Overall, models have similar performance under the Med-H and High-H datasets but suffer when trained only on Low-H. We present the results in Table 6, which shows that the Low-H dataset yields worse expert and amateur performance due to the reduced variability of preferences. However, MODT(P) and MORVS(P) are still able to come close to (or even exceed) the behavioral policy in hypervolume on all datasets, which showcases the effectiveness of PEDA as an efficient MORL policy. Results are based on an average of 3 seeds with the standard error given.
I TRAINING WITH 1-DIM RTG
We attempted to train MODT and RvS with a 1-dimensional return-to-go rather than a separate RTG for each objective. According to the results on MO-HalfCheetah and the High-H datasets in Table 7, using multi-dimensional RTG enhances the performance of MODT(P), and it is about the same for MORVS(P) when the preference is concatenated to the states. However, it reduces the standard error significantly in both MODT(P) and MORVS(P). In the naive models, where preferences are not concatenated to states, using a multi-dimensional RTG helps achieve a much more competitive hypervolume. We thus believe multi-dimensional RTG conveys important preference information when the model doesn't directly take the preference as an input. Results are based on an average of 3 seeds with the standard error given.

1. What is the focus and contribution of the paper regarding multi-objective reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of the new dataset and decision transformer-like algorithm?
3. What are the weaknesses of the paper, especially regarding experiment comparisons and lacking baselines?
4. How do the two contributions of the paper, the dataset and the algorithm, relate to each other? Are they independent or interdependent?
5. Is the dataset general enough to be considered an independent contribution, or is it specific to the proposed method?
6. What differentiates this work from multi-objective inverse reinforcement learning?
7. Can the proposed algorithm work on other datasets beyond the one introduced in the paper?

Summary Of The Paper
This manuscript did two things. One is to propose a new dataset that collects agent trajectories from multiple independent agents with different preferences on the objectives. This dataset contains trajectories from both well-trained agents and semi-trained agents. HalfCheetah is a typical example where actions that are large in absolute value will consume more energy while the agent wants the cheetah to run as fast as possible. Two is to propose a decision transformer-like algorithm that handles an offline multi-objective dataset (which is possibly the one proposed). This algorithm is quite intuitive and the main idea is to condition its output on the preference of the newly tested task. Several experiments are presented.
Strengths And Weaknesses
Pro:
This paper introduces a new dataset that targets the multi-objective RL with offline data.
This paper introduces a decision transformer-like algorithm which is intuitive and works in experiments.
Cons:
The baselines in the experiments seem lacking as only two methods (BC/CQL) are compared with. Is any offline RL and inverse RL approach relevant?
Questions
One question is how the two contributions - namely the dataset and the algorithm - are entangled to each other. With the presentation of this paper one would expect the contributions to be independent, i.e. we expect the algorithm to work on other datasets as well. Is there any way to validate this? I understand a similar dataset might not be present in the community for now.
Following the last question, is the dataset general enough to be an independent contribution? In the generation of this dataset I could see many possible variants in the process. Should the community use this dataset in the future, or is it good only for this work?
What is the difference of this work and multi-objective inverse RL? I'm not seeing much difference despite the work follows the vein of offline RL mostly.
Clarity, Quality, Novelty And Reproducibility
The presentation is of high clarity and quality. See the previous part for novelty. I cannot tell about its reproducibility.
ICLR | Title
Scaling Pareto-Efficient Decision Making via Offline Multi-Objective RL
Abstract
The goal of multi-objective reinforcement learning (MORL) is to learn policies that simultaneously optimize multiple competing objectives. In practice, an agent’s preferences over the objectives may not be known apriori, and hence, we require policies that can generalize to arbitrary preferences at test time. In this work, we propose a new data-driven setup for offline MORL, where we wish to learn a preference-agnostic policy agent using only a finite dataset of offline demonstrations of other agents and their preferences. The key contributions of this work are two-fold. First, we introduce D4MORL, (D)datasets for MORL that are specifically designed for offline settings. It contains 1.8 million annotated demonstrations obtained by rolling out reference policies that optimize for randomly sampled preferences on 6 MuJoCo environments with 2-3 objectives each. Second, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that builds and extends return-conditioned offline methods including Decision Transformers (Chen et al., 2021) and RvS (Emmons et al., 2021) via a novel preference-and-return conditioned policy. Empirically, we show that PEDA closely approximates the behavioral policy on the D4MORL benchmark and provides an excellent approximation of the Pareto-front with appropriate conditioning, as measured by the hypervolume and sparsity metrics.
1 INTRODUCTION
We are interested in learning agents for multi-objective reinforcement learning (MORL) that optimize for one or more competing objectives. This setting is commonly observed in many real-world scenarios. For instance, an autonomous driving car might trade off high speed and energy savings depending on the user’s preferences. If the user has a relatively high preference for speed, the agent will move fast regardless of power usage; on the other hand, if the user tries to save energy, the agent will keep a more steady speed. One key challenge with MORL is that different users might have different preferences on the objectives and systematically exploring policies for each preference might be expensive, or even impossible. In the online setting, prior work considers several approximations based on scalarizing the vector-valued rewards of different objectives based on a single preference (Lin, 2005), learning an ensemble of policies based on enumerating preferences (Mossalam et al., 2016, Xu et al., 2020), or extensions of single-objective algorithms such as Q-learning to vectorized value functions (Yang et al., 2019).
We introduce the setting of offline multi-objective reinforcement learning for high-dimensional state and action spaces, where our goal is to train an MORL policy agent using an offline dataset of demonstrations from multiple agents with known preferences. Similar to the single-task setting, offline MORL can utilize auxiliary logged datasets to minimize interactions, thus improving data efficiency and minimizing interactions when deploying agents in high-risk settings. In addition to its practical utility, offline RL (Levine et al., 2020) has enjoyed major successes in the last few years (Kumar et al., 2020, Kostrikov et al., 2021, Chen et al., 2021) on challenging high-dimensional environments for continuous control and game-playing. Our contributions in this work are two-fold in introducing benchmarking datasets and a new family of MORL, as described below.
We introduce Datasets for Multi-Objective Reinforcement Learning (D4MORL), a collection of 1.8 million trajectories on 6 multi-objective MuJoCo environments (Xu et al., 2020). Here, 5 environ-
ments consist of 2 objectives and 1 environment consists of 3 objectives. For each environment in D4MORL, we collect demonstrations from 2 pretrained behavioral agents: expert and amateur, where the relative expertise is defined in terms of the Pareto-efficiency of the agents and measured empirically via their hypervolumes. Furthermore, we also include 3 kinds of preference distributions with varying entropies to expose additional data-centric aspects for downstream benchmarking. Lack of MORL datasets and large-scale benchmarking has been a major challenge for basic research (Hayes et al., 2022), and we hope that D4MORL can aid future research in the field.
Next, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that extends return-conditioned methods including Decision Transformer (DT) (Chen et al., 2021) and RvS (Emmons et al., 2021) to the multi-objective setting. These methods learn a returnconditioned policy via a supervised loss on the predicted actions. In recent work, these methods have successfully scaled to agents that demonstrate broad capabilities in multi-task settings (Lee et al., 2022 Reed et al., 2022). For MORL, we introduce a novel preference and return conditioned policy network and train it via a supervised learning loss. At test time, naively conditioning on the default preferences and maximum possible returns leads to out-of-distribution behavior for the model, as neither has it seen maximum returns for all objectives in the training data nor is it possible to simultaneously maximize all objectives under competition. We address this issue by learning to map preferences to appropriate returns and hence, enabling predictable generalization at test-time.
Empirically, we find PEDA performs exceedingly well on D4MORL and closely approximates the reference Pareto-frontier of the behavioral policy used for data generation. In the multi-objective HalfCheetah environment, compared with an average upper bound on the hypervolume of 5.79ˆ106 achieved by the behavioral policy, PEDA achieves an average hypervolume of 5.77 ˆ 106 on the Expert and 5.76 ˆ 106 on the Amateur datasets.
2 RELATED WORK
Multi-Objective Reinforcement Learning Predominant works in MORL focus on the online setting where the goal is to train agents that can generalize to arbitrary preferences. This can be achieved by training a single preference-conditioned policy (Yang et al., 2019; Parisi et al., 2016), or an ensemble of single-objective policies for a finite set of preferences (Mossalam et al., 2016; Xu et al., 2020; Zhang & Li, 2007). Many of these algorithms consider vectorized variants of standard algorithms such as Q-learning (Mossalam et al., 2016; Yang et al., 2019), often augmented with strategies to guide the policy ensemble towards the Pareto front using evolutionary or incrementally updated algorithms (Xu et al., 2020; Zhang & Li, 2007; Mossalam et al., 2016; Roijers et al., 2014; Huang et al., 2022). Other approaches have also been studied, such as framing MORL as a meta-learning problem (Chen et al., 2019), learning the action distribution for each objective (Abdolmaleki et al., 2020), and learning the relationship between objectives (Zhan & Cao, 2019) among others. In contrast to these online MORL works, our focus is on learning a single policy that works for all preferences using only offline datasets.
There are also a few works that study decision-making with multiple objectives in the offline setting and sidestep any interaction with the environments. Wu et al., 2021 propose a provably efficient offline MORL algorithm for tabular MDPs based on dual gradient ascent. Thomas et al., 2021 study learning of safe policies by extending the approach of Laroche et al., 2019 to the offline MORL setting. Their proposed algorithm assumes knowledge of the behavioral policy used to collect the offline data and is demonstrated primarily on tabular MDPs with finite state and action spaces. In contrast, we are interested in developing dataset benchmarks and algorithms for scalable offline policy optimization in high-dimensional MDPs with continuous states and actions.
Multi-Task Reinforcement Learning MORL is also closely related to multi-task reinforcement learning, where every task can be interpreted as a distinct objective. There is an extensive body of work in learning multi-task policies both in the online and offline setups (Wilson et al., 2007; Lazaric & Ghavamzadeh, 2010; Teh et al., 2017) inter alia. However, the key difference is that typical MTRL benchmarks and algorithms do not consider solving multiple tasks that involve inherent trade-offs. Consequently, there is no notion of Pareto efficiency and an agent can simultaneously excel in all the tasks without accounting for user preferences.
Reinforcement Learning Via Supervised Learning A body of recent works have formulated offline reinforcement learning as an autoregressive sequence modeling problem using Decision Transformers (DT) or Trajectory Transformers ( Chen et al., 2021, Janner et al., 2021) The key idea in DT is to learn a transformer-based policy that conditions on the past history and a dynamic estimate of the returns (a.k.a. returns-to-go). Follow-up works consider online learning (Zheng et al., 2022) as well as simpler variants that rely only on multi-layer perceptrons (Emmons et al., 2021). Such agents are generally more stable and robust to optimize due to the simplicity of loss function and easier to scale to more complex settings such as environments with high-dimensional actions or states, as shown in recent works in multi-task RL (Lee et al., 2022; Reed et al., 2022).
3 PRELIMINARIES
Setup and Notation. We operate in the general framework of a multi-objective Markov decision process (MOMDP) with linear preferences (Wakuta, 1995). An MOMDP is represented by the tuple $\langle S, A, P, \mathbf{R}, \Omega, f, \gamma \rangle$. At each timestep $t$, the agent in state $s_t \in S$ takes an action $a_t \in A$ to transition into a new state $s_{t+1}$ with probability $P(s_{t+1} \mid s_t, a_t)$ and observes a reward vector $\mathbf{r}_t = \mathbf{R}(s_t, a_t) \in \mathbb{R}^n$. Here, $n$ is the number of objectives. The vector-valued return $\mathbf{R} \in \mathbb{R}^n$ of an agent is given by the discounted sum of reward vectors over a time horizon, $\mathbf{R} = \sum_t \gamma^t \mathbf{r}_t$. We also assume that there exists a linear utility function $f$ and a space of preferences $\Omega$ that map the reward vector $\mathbf{r}_t$ and a preference vector $\omega \in \Omega$ to a scalar reward $r_t$, i.e., $r_t = f(\mathbf{r}_t, \omega) = \omega^\top \mathbf{r}_t$. The expected vector return of a policy $\pi$ is given as $\mathbf{G}^\pi = [G^\pi_1, G^\pi_2, \dots, G^\pi_n]^\top$, where the expected return of the $i$-th objective is $G^\pi_i = \mathbb{E}_{a_{t+1} \sim \pi(\cdot \mid s_t, \omega)}\left[\sum_t \mathbf{R}(s_t, a_t)_i\right]$ for some predefined time horizon and preference vector $\omega$. The goal is to train a multi-objective policy $\pi(a \mid s, \omega)$ such that the expected scalarized return $\omega^\top \mathbf{G}^\pi = \mathbb{E}\left[\omega^\top \sum_t \mathbf{R}(s_t, a_t)\right]$ is maximized.
Pareto Optimality. In MORL, one cannot optimize all objectives simultaneously, so policies are evaluated based on the Pareto set of their vector-valued expected returns. Consider a preference-conditioned policy $\pi(a \mid s, \omega)$ that is evaluated for $m$ distinct preferences $\omega_1, \dots, \omega_m$, and let the resulting policy set be represented as $\{\pi_p\}_{p=1,\dots,m}$, where $\pi_p = \pi(a \mid s, \omega = \omega_p)$ and $\mathbf{G}^{\pi_p}$ is the corresponding unweighted expected return. We say the solution $\mathbf{G}^{\pi_p}$ is dominated by $\mathbf{G}^{\pi_q}$ when $\pi_q$ is no worse than $\pi_p$ on every objective, i.e., $G^{\pi_p}_i \leq G^{\pi_q}_i$ for all $i \in \{1, 2, \dots, n\}$, with strict inequality for at least one objective. If a solution is not dominated, it is part of the Pareto set, denoted $P$. The curve traced by the solutions in a Pareto set is also known as the Pareto front. In MORL, our goal is to learn a policy whose empirical Pareto set is a good approximation of the true Pareto front. While we do not know the true Pareto front for many problems, we can define metrics for relative comparisons between different algorithms. Specifically, we evaluate a Pareto set $P$ based on two metrics, hypervolume and sparsity, that we describe next.
Definition 1 (Hypervolume). Hypervolume $H(P)$ measures the space or volume enclosed by the solutions in the Pareto set $P$:

$$H(P) = \int_{\mathbb{R}^n} \mathbb{1}_{H(P)}(z)\, dz,$$

where $H(P) = \{z \in Z \mid \exists i: 1 \leq i \leq |P|,\ r \preceq z \preceq P(i)\}$. Here, $P(i)$ is the $i$-th solution in $P$, $\preceq$ is the dominance relation operator, $r$ is a reference point, and $\mathbb{1}_{H(P)}(z)$ equals 1 if $z \in H(P)$ and 0 otherwise. Higher hypervolumes are better.

Definition 2 (Sparsity). Sparsity $S(P)$ measures the density of the Pareto front covered by a Pareto set $P$:

$$S(P) = \frac{1}{|P| - 1} \sum_{i=1}^{n} \sum_{k=1}^{|P|-1} \left(\tilde{P}_i(k) - \tilde{P}_i(k+1)\right)^2,$$

where $\tilde{P}_i$ represents a list sorted as per the values of the $i$-th objective in $P$ and $\tilde{P}_i(k)$ is the $k$-th value in the sorted list. Lower sparsity is better.
See Figure 1 for an illustration and Appendix F for discussion on other possible metrics.
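To make these metrics concrete, the following is a minimal sketch (not the benchmark's reference implementation) that filters a set of two-objective returns down to its Pareto front and computes the hypervolume and sparsity defined above; the reference point r is an assumption and must be dominated by every solution.

import numpy as np

def pareto_front(points):
    # Keep only non-dominated 2-D return vectors (maximization).
    pts = points[np.argsort(-points[:, 0])]  # sort by objective 0, descending
    front, best_y = [], -np.inf
    for p in pts:
        if p[1] > best_y:  # strictly better on objective 1 than all kept so far
            front.append(p)
            best_y = p[1]
    return np.array(front)

def hypervolume_2d(front, ref):
    # Area dominated by the front and bounded below by the reference point ref.
    f = front[np.argsort(front[:, 0])]  # ascending in objective 0, descending in objective 1
    hv, prev_x = 0.0, ref[0]
    for x, y in f:
        hv += (x - prev_x) * (y - ref[1])
        prev_x = x
    return hv

def sparsity(front):
    # Definition 2: squared gaps between consecutive sorted values, summed over
    # objectives and divided by |P| - 1.
    if len(front) <= 1:
        return 0.0
    gaps = 0.0
    for i in range(front.shape[1]):
        s = np.sort(front[:, i])
        gaps += np.sum((s[1:] - s[:-1]) ** 2)
    return gaps / (len(front) - 1)

returns = np.array([[3., 1.], [2., 2.], [1., 3.], [1., 1.]])
P = pareto_front(returns)
print(hypervolume_2d(P, ref=np.zeros(2)), sparsity(P))  # 6.0 and 2.0 for this toy set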
4 D4MORL: DATASETS FOR OFFLINE MULTI-OBJECTIVE REINFORCEMENT LEARNING
In offline RL, the goal of an RL agent is to learn the optimal policy using a fixed dataset without any interactions with the environment (Levine et al., 2020). This perspective brings RL closer to supervised learning, where the presence of large-scale datasets has been foundational for further progress in the field. Many such data benchmarks exist for offline RL as well; a notable one is the D4RL (Fu et al., 2020) benchmark for continuous control which has led to the development of several state-of-the-art offline RL algorithms (Kostrikov et al., 2021; Kumar et al., 2020; Chen et al., 2021) that can scale favorably even in high dimensions. To the best of our knowledge, there are no such existing benchmarks for offline MORL. Even for the online setting, most works in MORL conduct evaluations on toy MDPs (e.g., gridworld) with a few exceptions that include continuous control, e.g., Chen et al. (2019); Xu et al. (2020). This calls for a much-needed push towards more challenging benchmarks for reliable evaluation of MORL, especially in the offline setting.
We introduce Datasets for Multi-Objective Reinforcement Learning (D4MORL), a large-scale benchmark for offline MORL. Our benchmark consists of offline trajectories from 6 multi-objective MuJoCo environments, including 5 environments with 2 objectives each (MO-Ant, MO-HalfCheetah, MO-Hopper, MO-Swimmer, MO-Walker2d) and one environment with three objectives (MO-Hopper-3obj). The objectives are conflicting for each environment; for instance, the two objectives in MO-Hopper correspond to jumping and running; in MO-HalfCheetah, MO-Swimmer, and MO-Walker2d, they correspond to the speed and energy savings of the agent. See Appendix A for more details on the semantics of the target objectives for each environment. These environments were first introduced in Xu et al. (2020) for online MORL, and as such, we use their pretrained ensemble policies as building blocks for defining new behavioral policies for dataset collection, which we discuss next.
4.1 TRAJECTORY SAMPLING
The quality of the behavioral policy used for sampling trajectories in the offline dataset is a key factor for benchmarking downstream offline RL algorithms. In existing benchmarks for single-objective RL such as D4RL (Fu et al., 2020), the quality of a behavioral policy can be ascertained and varied based on its closeness to a single expert policy, as measured by its scalar-valued returns. For a MOMDP, we do not have the notion of a scalar return and hence, a reference expert policy (or set of policies) should reflect the optimal returns for all possible preferences in the preference space.
We use Prediction-Guided Multi-Objective Reinforcement Learning (PGMORL), a state-of-the-art MORL algorithm, for defining reference expert policies. PGMORL (Xu et al., 2020) uses evolutionary algorithms to train an ensemble of policies to approximate the Pareto set. Each reference policy in the ensemble is associated with a unique preference; any new preference is mapped to the closest preference in the reference set. The number of policies in the ensemble can vary significantly; for instance, we have roughly 70 reference policies for MO-Ant and 2445 policies for harder environments such as MO-Hopper-3obj. Given a desired preference, we define two sets of behavioral policies:
1. Expert Dataset: We find the best reference policy in the policy ensemble, and always follow the action taken by the selected reference policy.
2. Amateur Dataset: As before, we first find the best reference policy in the policy ensemble. With a fixed probability p, we randomly perturb the actions of the reference policy; otherwise, with probability 1 - p, we take the same action as the reference policy. In D4MORL, we set p = 0.65.
Further details are described in Appendix C. In Figure 2, we show the returns of the trajectories rolled out from the expert and amateur policies for the 2 objective environments evaluated for a uniform sampling of preferences. We can see that the expert trajectories typically dominate the amateur trajectories, as desired. For the amateur trajectories, we see more diversity in the empirical returns for both objectives under consideration. The return patterns for the amateur trajectories vary across different environments providing a diverse suite of datasets in our benchmark.
4.2 PREFERENCE SAMPLING
The coverage of any offline dataset is an important factor in dictating the performance of downstream offline RL algorithms (Levine et al., 2020). For MORL, the coverage depends on both the behavioral MORL policy as well as the distribution of preferences over which this policy is evaluated. We use the following protocols for sampling from the preference space $\Omega$. First, we restrict our samples to lie within a physically plausible preference space $\Omega^* \subseteq \Omega$ covered by the behavioral policy $\pi_\beta$. For instance, MO-Hopper has two objectives: jumping and running. Since the agent can never gain running rewards without leaving the floor, the preference of 100% running and 0% jumping is not achievable and is excluded from our preference sampling distribution.
Second, we are primarily interested in offline trajectories that emphasize competition between multiple objectives rather than focusing on a singular objective. To enforce this criterion, we define 3 sampling distributions concentrated around the centroid of the preference simplex. The largest spread distribution samples uniformly from Ω˚ and is denoted as High-Entropy (High-H). Next, we have a Medium-Entropy (Med-H) distribution specified via samples of Dirichlet distributions with large values of their concentration hyperparameters (aka α). Finally, we have a Low-Entropy (Low-H) distribution that is again specified via samples of Dirichlet distributions but with low values of their concentration hyperparameters. We illustrate the samples for each of the preference distributions along with their empirical entropies in Figure 3. Further details on the sampling distributions are deferred to Appendix B. By ensuring different levels of coverage, we can test the generalizability of an MORL policy to preferences unseen during training. In general, we expect Low-H to be the hardest of the three distributions due to its restricted coverage, followed by Med-H and High-H.
Overall Data Generation Pipeline. The pseudocode for generating the dataset is described in Algorithm 1. Given a preference distribution, we first sample a preference ω and query the closest behavioral policy in either the amateur/expert ensemble matching ω. We roll out this policy for T time steps (or until the end of an episode if sooner) and record the state, action, and reward information. Each trajectory in our dataset is represented as:
$$\tau = \langle \omega, s_1, a_1, \mathbf{r}_1, \dots, s_T, a_T, \mathbf{r}_T \rangle$$
Algorithm 1 Data Collection in D4MORL

procedure COLLECT(prefDist, nTraj, env, pretrainedAgents, T)
    agents ← pretrainedAgents
    prefs ← prefDist(nTraj)
    all_trajs ← []
    for ω in prefs do
        agent ← closestAgent(agents, ω)
        s ← env.reset(); done ← False; τ ← [ω]; t ← 0
        while (not done) and (t < T) do
            a ← agent.get_action(s)
            s′, done, r ← env.step(a)
            append (s, a, s′, r) to τ
            s ← s′; t ← t + 1
        end while
        append τ to all_trajs
    end for
    return all_trajs
end procedure
For every environment in D4MORL, we collect 50K trajectories of length T = 500 for both expert and amateur trajectory distributions under each of the 3 preference distributions. Overall, this results in a total of 1.8M trajectories over all 6 environments, which corresponds to roughly 867M time steps. We refer the reader to Table 5 in Appendix B for additional statistics on the dataset.
5 PARETO-EFFICIENT DECISION AGENTS (PEDA)
In this section, we propose Pareto-Efficient Decision Agents (PEDA), a new family of offline multi-objective RL agents. PEDA aims to achieve Pareto-efficiency by extending Decision Transformers (Chen et al., 2021) to the multi-objective setting. We first introduce the architecture of Decision Transformers (DT) and its variant, Reinforcement Learning Via Supervised Learning (RvS), followed by our modifications extending them to the multi-objective setting.
DT casts offline RL as a conditional sequence modeling problem that predicts the next action by conditioning a transformer on past states, actions, and desired returns. The desired returns are defined as returns-to-go (RTG), $g_t = \sum_{t'=t}^{T} r_{t'}$, the future returns that this action is intended to achieve. Therefore, the trajectory is represented by $\tau = \langle s_1, a_1, g_1, \dots, s_T, a_T, g_T \rangle$. In practice, we use a causally masked transformer architecture such as GPT (Radford et al., 2019) to process this sequence and predict the actions by observing the past $K$ timesteps, consisting of $3K$ tokens. DT and its variants have been shown to be more stable and robust to optimize due to the simplicity of the loss function, and easier to scale to more complex settings such as environments with high-dimensional actions or states and agents with broad capabilities such as multitask settings (Lee et al., 2022). Hence, we adopt Decision Transformers (Chen et al., 2021) as the representative base algorithm on which we build our work.
In follow-up work, Emmons et al. (2021) extend DT and show that even multi-layer perceptrons conditioned on the average returns-to-go can achieve similar performance without the use of transformers. They call their model Reinforcement Learning Via Supervised Learning (RvS). However, RvS is generally not very stable when conditioned on very large returns, unlike DT.
5.1 MULTI-OBJECTIVE REINFORCEMENT LEARNING VIA SUPERVISED LEARNING
In PEDA, our goal is to train a single preference-conditioned agent for offline MORL. By including preference conditioning, we enable the policy to be trained on arbitrary offline data, including trajectories collected from behavioral policies that are associated with alternate preferences. To parameterize our policy agents, we extend the DT and RvS architectures to include preference tokens and vector-valued returns. We refer to such preference-conditioned extensions of these architectures as MODT(P) and MORVS(P) respectively, which we describe next.
Preference Conditioning. Naively, we can incorporate the preference $\omega$ into DT by adding this token for each timestep and feeding it to a separate embedding layer. However, empirically we find that such a model design tends to ignore $\omega$, and the correlation between the preferences and predicted actions is weak. Therefore, we propose to concatenate $\omega$ to the other tokens before any layers in MODT(P). Concretely, we define $s^* = s \oplus \omega$, $a^* = a \oplus \omega$, and $g^* = g \oplus \omega$, where $\oplus$ denotes the concatenation operator. Hence, triples of $s^*, a^*, g^*$ form the new trajectory. As for MORVS(P), we concatenate the preference with the states and the average RTGs by default, and the network interprets everything as one single input.
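The concatenation step is simple to implement; below is a minimal sketch (variable names are ours, not taken from the released code) that broadcasts the preference vector onto every state, action, and RTG token of a trajectory.

import numpy as np

def build_modt_tokens(states, actions, rtgs, pref):
    # states: (T, state_dim); actions: (T, act_dim); rtgs: (T, n_obj); pref: (n_obj,)
    T = states.shape[0]
    w = np.tile(pref, (T, 1))                       # repeat omega for every timestep
    s_star = np.concatenate([states, w], axis=-1)   # s* = s (+) omega
    a_star = np.concatenate([actions, w], axis=-1)  # a* = a (+) omega
    g_star = np.concatenate([rtgs, w], axis=-1)     # g* = g (+) omega
    return s_star, a_star, g_star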
Multi-Objective Returns-to-Go. Similar to RTG in the single-objective case, we can define the vector-valued RTG as $\mathbf{g}_t = \sum_{t'=t}^{T} \mathbf{r}_{t'}$. Given a preference vector $\omega$, we can scalarize the total returns-to-go as $\hat{g}_t = \omega^\top \mathbf{g}_t$. In principle, the scalarized RTG $\hat{g}_t$ can be recovered given the preference vector $\omega$ and the vector-valued RTG $\mathbf{g}_t$. However, empirically we find that directly feeding MODT/MORVS with the preference-weighted RTG vector $\mathbf{g}_t \odot \omega$ is slightly preferable for stable training, where $\odot$ denotes the elementwise product operator. Another unique challenge in the MORL setting concerns the scale of different objectives. Since different objectives can signify different physical quantities (e.g., energy and speed), the choice of scaling can influence policy optimization. We adopt a simple normalization scheme, where the returns for each objective are normalized by subtracting the minimum observed value for that objective and dividing by the range of values (max - min). Note that the maximum and minimum are computed based on the offline dataset, and hence they are not necessarily the true min/max objective values. After this normalization, the values for every objective in the trajectory are on the same scale, between 0 and 1. For evaluating the hypervolume and sparsity, we use the unnormalized values so that we can make comparisons across different datasets that may have different min/max boundaries.
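A sketch of the full RTG pipeline follows, assuming per-objective min/max statistics precomputed over the offline dataset; applying the normalization directly to the returns-to-go, as done here, is a simplifying assumption of this sketch.

import numpy as np

def weighted_rtg(rewards, pref, obj_min, obj_max):
    # rewards: (T, n_obj) per-step reward vectors; pref: (n_obj,) preference vector.
    rtg = np.cumsum(rewards[::-1], axis=0)[::-1]      # g_t = sum_{t' >= t} r_{t'}
    rtg_norm = (rtg - obj_min) / (obj_max - obj_min)  # min-max normalize each objective
    return rtg_norm * pref                            # elementwise product g_t (.) omega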
Training. We follow a simple supervised training procedure where we train the policies on randomly sampled mini-batches with the MSE loss (for continuous actions). In MODT and MODT(P), the input states, actions, and returns-to-go (with concatenated preferences) are treated as tokens and embedded through one layer of MLP. We apply a layer of MLP and Tanh on the last hidden state of the GPT-2 transformer to predict the next action. In MORVS and MORVS(P), we use only information from the current timestep and MLP layers to predict the next action.
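For MORVS(P), the whole model reduces to an MLP over one concatenated input; the following PyTorch sketch of the policy and a single supervised step is ours (layer sizes follow Appendix E, but the Tanh output head is an assumption).

import torch
import torch.nn as nn

class MORvSPolicy(nn.Module):
    # MLP policy: concat(state, preference, weighted average RTG) -> action.
    def __init__(self, state_dim, act_dim, n_obj, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2 * n_obj, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, state, pref, avg_rtg):
        return self.net(torch.cat([state, pref, avg_rtg], dim=-1))

# One supervised step on a randomly sampled mini-batch (dummy shapes for illustration).
policy = MORvSPolicy(state_dim=17, act_dim=6, n_obj=2)
opt = torch.optim.AdamW(policy.parameters(), lr=1e-4, weight_decay=1e-3)
s, w, g, a = (torch.randn(64, 17), torch.rand(64, 2),
              torch.rand(64, 2), torch.randn(64, 6))
loss = ((policy(s, w, g) - a) ** 2).mean()  # MSE on continuous actions
opt.zero_grad(); loss.backward(); opt.step()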
6 EXPERIMENTS
In this section, we evaluate the performance of PEDA on the D4MORL benchmark. First, we investigate the benefits of preference conditioning by also evaluating naive decision transformer (MODT) and RvS (MORVS) variants for which no preference information is available and the multi-objective vector returns are scalarized into weighted sums. We denote our methods with preference conditioning as MODT(P) and MORVS(P). Second, we compare our methods with classic imitation learning and temporal difference learning algorithms with preference conditioning.
Imitation learning. Imitation learning uses a simple supervised loss to train a mapping from states (with or without concatenated preferences) to actions. We use behavioral cloning (BC) here and train multi-layer MLPs, yielding the models BC (without preference) and BC(P) (with preference).
Temporal difference learning. Conservative Q-Learning (CQL) (Kumar et al., 2020) is a state-of-the-art standard offline RL method, which learns a conservative Q-function $f: S \times A \to \mathbb{R}$ through neural networks. We modify the network architecture such that it also takes preference vectors as inputs, learning a preference-conditioned Q-function $f^*: S \times A \times \Omega \to \mathbb{R}$. We denote this method as CQL(P).
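The only architectural change relative to standard CQL is widening the critic's input; a sketch of the preference-conditioned Q-network is shown below (the layer width is our choice, and the conservative regularizer itself is unchanged and omitted).

import torch
import torch.nn as nn

class PreferenceQ(nn.Module):
    # Q(s, a, omega): a standard MLP critic with the preference appended to its input.
    def __init__(self, state_dim, act_dim, n_obj, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + act_dim + n_obj, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a, w):
        return self.net(torch.cat([s, a, w], dim=-1))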
6.1 MULTI-OBJECTIVE OFFLINE BENCHMARK
Hypervolume. We compare the hypervolume of our methods with all baselines on the expert datasets in Table 1 as well as the amateur datasets in Table 2. For the two-objective environments, we evaluate the models on 501 equally spaced preference points in the range [0, 1]; on the three-objective environment MO-Hopper-3obj, models are evaluated on 325 equally spaced points. Each point is evaluated 5 times with random environment re-initialization, and the median value is recorded. Finally, all the results are based on 3 random seeds, and we report the mean performance along with the standard error. In Table 1 and Table 2, we can see that MODT(P) and MORVS(P) outperform the other baselines and have relatively low standard errors. Moreover, the PEDA variants MODT(P) and MORVS(P) approach the behavioral-policy upper bound.
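The evaluation grid itself is straightforward to reproduce; for two objectives we read the 501 equally spaced points as ω = (i/500, 1 - i/500), which is an assumption of this sketch, as is the user-supplied rollout function.

import numpy as np

def eval_preferences(n_points=501):
    # Equally spaced preference vectors on the 2-objective simplex.
    w0 = np.linspace(0.0, 1.0, n_points)
    return np.stack([w0, 1.0 - w0], axis=-1)

def eval_point(rollout_fn, pref, n_rollouts=5):
    # rollout_fn(pref) -> vector return of one episode (user-supplied).
    returns = np.stack([rollout_fn(pref) for _ in range(n_rollouts)])
    return np.median(returns, axis=0)  # median over stochastic re-initializations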
Sparsity. We also evaluate sparsity performance. Since sparsity comparison is only meaningful between models that are sensitive to preference and have a relatively similar hypervolume performance, we only show results for models that concatenate preference. Overall, MORVS(P) has the lowest sparsity in most environments, while at the same time featuring an outstanding hypervolume.
6.2 ABLATION STUDY
Pareto front approximation. We ablate how well the MODT(P) and MORVS(P) can approximate the Pareto front through conditioning on different preference points. We show the results in Figure 4, where we can see that the models can approximate the Pareto front while having some dominated points colored in pink mostly in the MO-Hopper and MO-Walker2d environments. The results are based on the average of 3 seeds, and the full plot can be found in Appendix G.
Return distribution. We ablate how well MODT(P) and MORVS(P) follow their given target return, based on a normalized and weighted value. We present the results in Figure 5 for MORVS(P) under High-H-Expert datasets and refer to Appendix H for full settings. Here, we see that the models follow the oracle line nicely when conditioned on the target within the dataset distribution, and generalize to targets outside of the dataset distribution as well.
7 CONCLUSION
We proposed a new problem setup for offline multi-objective reinforcement learning to scale Pareto-efficient decision-making using offline datasets. To characterize progress, we introduced D4MORL, a benchmark consisting of offline datasets generated from behavioral policies of different fidelities (expert/amateur) and rolled out under preference distributions with varying entropies (high/medium/low). We then proposed PEDA, a family of offline MORL policy optimization algorithms based on decision transformers. To our knowledge, the PEDA variants are the first offline MORL policies that support continuous action and preference spaces. We showed that by concatenating and embedding preferences together with other inputs, our policies can effectively approximate the Pareto front of the underlying behavioral policy, as measured by the hypervolume and sparsity metrics. Our proposed family includes MLP- and transformer-based variants, viz. MORVS(P) and MODT(P), with MORVS(P) performing the best overall. In some scenarios, the learned policies can also generalize to higher target rewards that exceed the data distribution.
REPRODUCIBILITY STATEMENT
Our code is available at: https://github.com/baitingzbt/PEDA.
ACKNOWLEDGEMENTS
AG’s research is supported by a Meta Research Award and a Cisco grant.
A ENVIRONMENT DESCRIPTION
All environments are the same as in Xu et al., 2020, except that when resetting the environment, each parameter is uniformly sampled from $[x - 10^{-3}, x + 10^{-3}]$, with $x$ being the default value. The only exception is that we always reset the height to 1.25 for MO-Hopper and MO-Hopper-3obj, since this parameter directly relates to the reward function. All environments have a maximum episode length of 500 steps per trajectory, but the agent may also die before reaching the maximum length.
A.1 MO-ANT
The two objectives in MO-Ant are the distances achieved along the x and y axes respectively, denoted as $\mathbf{r} = [r_t^{v_x}, r_t^{v_y}]^\top$.
Suppose the position of the agent is $(x_t, y_t)$ at time $t$ and it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, $dt = 0.05$, and an action cost of $r_a = \frac{1}{2}\sum_k a_k^2$. The rewards are calculated as:

$$r_t^{v_x} = (x_t - x_{t-1}) / dt + r_s - r_a, \qquad r_t^{v_y} = (y_t - y_{t-1}) / dt + r_s - r_a \quad (1)$$
A.2 MO-HALFCHEETAH
The two objectives in MO-HalfCheetah are running speed, and energy saving, denoted as r “ rrvt , ret s⊺. Consider the position of the agent is represented as pxt, ytq at time t and takes the action at. The agent has a fixed survival reward rs “ 1.0, fixed dt “ 0.05, and an action cost of ra “ ř
k a 2 k. The
rewards are calculated as:
rvt “ mint4.0, pxt ´ xt´1q { dtu ` rs ret “ 4.0 ´ ra ` rs (2)
A.3 MO-HOPPER
The two objectives in MO-Hopper are running and jumping, denoted as r “ rrr, rjs⊺. Consider the position of the agent is represented as pxt, htq at time t and takes the action at. The agent has a fixed survival reward rs “ 1.0, a fixed initial height as hinit “ 1.25, a fixed dt “ 0.01, and an action cost of ra “ 2 ˆ 10´4 ř
k a 2 k. The rewards are calculated as:
rrt “ 1.5 ˆ pxt ´ xt´1q { dt ` rs ´ ra rjt “ 12 ˆ pht ´ hinitq { dt ` rs ´ ra (3)
A.4 MO-HOPPER-3OBJ
The physical dynamics are the same in MO-Hopper and MO-Hopper-3obj, while this environment has 3 objectives: running, jumping, and energy saving. The rewards are denoted as r “ rrr, rj , res⊺. Consider the position of the agent is represented as pxt, htq at time t and takes the action at. The agent has a fixed survival reward rs “ 1.0, a fixed initial height as hinit “ 1.25, a fixed dt “ 0.01, and an action cost of ra “ ř
k a 2 k. The rewards are calculated as:
rrt “ 1.5 ˆ pxt ´ xt´1q { dt ` rs
rjt “ 12 ˆ pht ´ hinitq { dt ` rs ret “ 4.0 ´ ra ` rs (4)
A.5 MO-SWIMMER
The two objectives in MO-Swimmer are speed and energy saving, denoted as r “ rrv, res⊺. Consider the position of the agent is represented as pxt, ytq at time t and takes the action at. The agent has a fixed dt “ 0.05, and an action cost of ra “ ř
k a 2 k. The rewards are calculated as:
rvt “ pxt ´ xt´1q { dt ret “ 0.3 ´ 0.15 ˆ ra
(5)
A.6 MO-WALKER2D
The objectives in MO-Walker2d are speed and energy saving, denoted as r “ rrv, res⊺. Consider the position of the agent is represented as pxt, ytq at time t and takes the action at. The agent has a fixed survival reward rs “ 1.0, a fixed dt “ 0.008, and an action cost of ra “ ř
k a 2 k.
The rewards are calculated as:
rvt “ pxt ´ xt´1q { dt ` rs ret “ 4.0 ´ ra ` rs (6)
B DATASET DETAILS
To uniformly sample the High-H data from the entire preference space, we can equivalently sample from an $n$-dimensional simplex, where $n$ is the number of objectives. We draw each component i.i.d. from an exponential distribution and normalize by the 1-norm:

$$\omega_{\text{high}} = x / \|x\|_1, \quad x_i \sim \text{Exp}(\lambda = 1) \quad (9)$$

Normalizing the exponential draws by their 1-norm ensures that the entries of each preference vector sum to 1 and yields a uniform distribution over the simplex. When $\Omega^* \neq \Omega$, we perform rejection sampling to restrict the range.
To sample the Med-H and Low-H data, we first sample $\alpha$ from a non-negative uniform distribution, then sample the corresponding Dirichlet preference. Here, we sample a different $\alpha$ each time to make sure the mode of the Dirichlet changes, which allows more variation:

$$\omega_{\text{med}} \sim \text{Dirichlet}(\alpha), \quad \alpha \sim \text{Unif}(0, 10^6)$$
$$\omega_{\text{low}} \sim \text{Dirichlet}(\alpha), \quad \alpha \sim \text{Unif}\left(\tfrac{1}{3} \times 10^6, \tfrac{2}{3} \times 10^6\right) \quad (10)$$
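A numpy sketch of the three samplers is given below (rejection sampling against Ω* is omitted; note that a draw of α ≈ 0 in the Med-H case would need to be rejected, since the Dirichlet requires α > 0).

import numpy as np

rng = np.random.default_rng(0)

def sample_high(n_obj):
    # Uniform on the simplex: normalize i.i.d. Exp(1) draws, cf. Eq. (9).
    x = rng.exponential(scale=1.0, size=n_obj)
    return x / x.sum()

def sample_dirichlet(lo, hi, n_obj):
    # Dirichlet with a freshly drawn concentration vector, cf. Eq. (10).
    alpha = rng.uniform(lo, hi, size=n_obj)
    return rng.dirichlet(alpha)

w_high = sample_high(2)
w_med = sample_dirichlet(0.0, 1e6, 2)          # Med-H
w_low = sample_dirichlet(1e6 / 3, 2e6 / 3, 2)  # Low-H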
Since our behavioral policy consists of a group of single-objective policies $\pi_\beta = \{\pi_1, \dots, \pi_B\}$, with $B$ being the total number of candidate policies, we first find the expected unweighted raw returns $\mathbf{G}^{\pi_1}, \dots, \mathbf{G}^{\pi_B}$. Then, we find the estimated preferences $\hat{\omega}^{\pi_1}, \dots, \hat{\omega}^{\pi_B}$ by letting

$$\hat{\omega}_i^{\pi_b} = \frac{G_i^{\pi_b}}{\sum_{j=1}^{n} G_j^{\pi_b}},$$

which represents the estimated preference on the $i$-th objective of the $b$-th candidate policy. For each sampled preference $\omega \sim \Omega^*$ following (9) or (10), we sample a complete trajectory using the single-objective behavioral policy with the smallest Euclidean distance $\min_b d(\omega, \hat{\omega}^{\pi_b})$. Empirically, this means picking the candidate policy whose expected reward ratio is closest to $\omega$.
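Matching a sampled preference to a behavioral policy then reduces to a nearest-neighbor lookup over the estimated preferences; a sketch, where G is the B × n matrix of expected unweighted returns:

import numpy as np

def closest_agent(G, pref):
    # G: (B, n_obj) expected raw returns of the B candidate policies.
    omega_hat = G / G.sum(axis=1, keepdims=True)      # estimated preference per policy
    dists = np.linalg.norm(omega_hat - pref, axis=1)  # euclidean distance to target
    return int(np.argmin(dists))                      # index of the matched policy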
C EXPERT & AMATEUR DATASETS
In the Expert collection, we sample trajectories using the fully trained behavioral policy $\pi_\beta$. In this paper, we use PGMORL by Xu et al., 2020 as our behavioral policy $\pi_\beta$:

$$a_{t+1}^{\text{expert}} = \pi_\beta(a \mid s = s_t, \omega = \omega_t) \quad (11)$$
In the Amateur collection, actions are perturbed on top of the expert collection: with probability 0.35 the expert action is kept, and with probability 0.65 it is made stochastic by scaling the expert action with a random factor:

$$a_{t+1}^{\text{amateur}} = \begin{cases} a_{t+1}^{\text{expert}} & \text{w.p. } 0.35 \\ a_{t+1}^{\text{expert}} \times \text{Unif}(0.35, 1.65) & \text{w.p. } 0.65 \end{cases} \quad (12)$$
In the MO-Swimmer environment only, we instead let actions have a 35% chance of being a uniform random sample from the entire action space, rather than matching the expert, to increase variance and achieve a performance profile similar to the other amateur datasets. The resulting strategy for MO-Swimmer is:

$$a_{t+1}^{\text{amateur}} = \begin{cases} \text{Unif}(A) & \text{w.p. } 0.35 \\ a_{t+1}^{\text{expert}} \times \text{Unif}(0.35, 1.65) & \text{w.p. } 0.65 \end{cases} \quad (13)$$
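The perturbation rule of Equations (12)-(13) takes only a few lines; in this sketch the scaling factor is a single scalar per action vector, and the action-space bounds for the MO-Swimmer branch are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def amateur_action(expert_action, low=None, high=None, swimmer=False):
    # Keep/replace the expert action w.p. 0.35; otherwise rescale by Unif(0.35, 1.65).
    if rng.random() < 0.35:
        if swimmer:
            return rng.uniform(low, high)  # uniform sample over the action space, Eq. (13)
        return expert_action               # unperturbed expert action, Eq. (12)
    return expert_action * rng.uniform(0.35, 1.65)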
D FINDING APPROPRIATE MULTI-OBJECTIVE RTG
In Decision Transformer (Chen et al., 2021) and RvS (Emmons et al., 2021), the RTG denotes the future desired reward. In MORL, however, designing an appropriate multi-objective RTG is necessary. On top of discounting each objective's desired reward separately, we empirically find that since some objectives are inherently conflicting, setting the RTG high for one objective means we should accordingly lower the RTG for the other objectives (i.e., we should not use the maximum RTG for all of them simultaneously). In this way, our test-time RTG stays closer to the training distribution.
In this paper, we use linear regression $\mathbf{G} = f(\omega)$ to find the corresponding RTG conditioned on the given preference. Figure 6 demonstrates the weighted RTG of the "running" objective as a function of its preference in MO-Hopper, where the conflicting objectives are "running" and "jumping". It is clear that the RTG closely correlates with the conditioned preference for running, and we should adjust the initial RTG at test time accordingly.
Finally, we only use the linear regression model learned from the Expert dataset. This is because regression models fitted on sub-optimal data can easily produce an RTG lower than optimal. In practice, we can achieve a similar result by training the regression model only on the best-performing trajectories for their respective preferences. Other regression or clustering methods to find an appropriate RTG could also work, and we leave this as future work, especially for settings that do not assume linearly weighted objectives.
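A sketch of the preference-to-RTG regression described above, fit only on expert trajectories (sklearn is used for brevity; the data here are placeholders for illustration).

import numpy as np
from sklearn.linear_model import LinearRegression

prefs = np.random.rand(1000, 2)
prefs /= prefs.sum(axis=1, keepdims=True)  # preferences of expert trajectories
returns = np.random.rand(1000, 2)          # their achieved (normalized) vector returns

rtg_model = LinearRegression().fit(prefs, returns)  # G = f(omega), multi-output

def initial_rtg(pref):
    # Test-time target return for a new preference vector.
    return rtg_model.predict(pref.reshape(1, -1))[0]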
E TRAINING DETAILS
In this section, we list our hyperparameters and model details. Specifically, we use the same hyperparameters for all algorithms, except for the learning rate scheduler and warm-up steps. In the MODT family, inputs are first embedded by a 1-layer fully-connected network, and n_layer represents the number of transformer blocks; in the BC family, n_layer represents the number of MLP layers used to embed each input; in MORVS and MORVS(P), we leverage the same embedding strategy as in Emmons et al. (2021). Additionally, MORVS and MORVS(P) both have a context length of 1 because they only use the current state to predict the next action, whereas MODT and BC use the past 20 timesteps.
E.1 PARAMETERS
Hyperparameter      MODT      MORvS     BC
Context Length K    20        1         20
Batch Size          64        64        64
Hidden Size         512       512       512
Learning Rate       1e-4      1e-4      1e-4
Weight Decay        1e-3      1e-3      1e-3
Dropout             0.1       0.1       0.1
n_layer             3         3         3
Optimizer           AdamW     AdamW     AdamW
Loss Function       MSE       MSE       MSE
LR Scheduler        lambda    None      lambda
Warm-up Steps       10000     N/A       4000
Activation          ReLU      ReLU      ReLU
E.2 TRAINING STEPS
Dataset Name        MODT Steps    RvS/BC Steps
MO-Ant              20K           200K
MO-HalfCheetah      80K           200K
MO-Hopper           400K          200K
MO-Hopper-3obj      400K          200K
MO-Swimmer          260K          200K
MO-Walker2d         360K          200K
E.3 OTHER ATTEMPTED ARCHITECTURES FOR MODT AND MODT(P)
We tried the following MODT architectures in our preliminary experiments. We eventually picked Case 4, as it gave the best performance in our experiments.

1. Consider $\omega$ as an independent token of the transformer.

2. Train a separate embedding for $\omega$, concatenate the embeddings to get $f_{\phi_s}(s) \oplus f_{\phi_\omega}(\omega)$, $f_{\phi_a}(a) \oplus f_{\phi_\omega}(\omega)$, and $f_{\phi_g}(g) \oplus f_{\phi_\omega}(\omega)$, then pass them into the transformer.

3. Add another MLP layer on top of Case 2 after concatenation, then pass the output into the transformer.

4. Concatenate $\omega$ to the other tokens before any layers. This means we have $s^* = s \oplus \omega$, $a^* = a \oplus \omega$, and $g^* = g \oplus \omega$.
F OTHER EVALUATION METRICS
Among the variety of metrics for MORL, we use Hypervolume (HV) and Sparsity (SP) to benchmark models for several reasons. First, many metrics such as the ϵ-metric require prior knowledge of the true Pareto front, which is not available for our MuJoCo environments. Second, we assume only a linear utility function and cannot collect real-time user feedback, so utility-based metrics such as the expected utility metric (EUM) are not applicable. Finally, using the same metrics as the original behavioral policy paper facilitates algorithm comparisons.
G PARETO SET VISUALIZATIONS
We present the Pareto set visualizations for all of our models trained under each High-H dataset in Figure 7. Each point in each subplot is based on the average result of 3 seeds. In the 2-objective environments, we evaluate the model using 501 equally spaced preference points. In the 3-objective environment, we use 351 equally spaced preference points instead. Since the environments are stochastically initialized, we evaluate 5 times at each preference point and take the mean value. This makes each point the average value of 15 runs. We allow a small tolerance here when coloring the dominated points.
If a preference point is within the achievable preference space Ω* but the solution is dominated, we color it in red. Since our models are conditioned on continuous preference points and the environments are initialized stochastically, we give a small tolerance (3%-8%) for points to be colored in blue. The hypervolume and sparsity metrics, on the other hand, are based on strictly undominated solutions without tolerance.
H MEDIUM & LOW ENTROPY DATASET TRAINING
We train on the Medium-Entropy and Low-Entropy datasets for the MO-HalfCheetah environment. Overall, models have similar performance under the Med-H and High-H datasets but suffer when trained only on Low-H. We present the results in Table 6, which illustrates that the Low-H dataset has worse expert and amateur performance due to the reduced variability in preferences. However, MODT(P) and MORVS(P) are still able to get close to, or exceed, the behavioral policy's hypervolume on all datasets, which showcases the effectiveness of PEDA as an efficient MORL policy. Results are based on an average of 3 seeds with the standard error given.
I TRAINING WITH 1-DIM RTG
We attempted to train MODT and RvS with a 1-dimensional return-to-go rather than a separate RTG for each objective. According to the results on MO-HalfCheetah and the High-H datasets in Table 7, using a multi-dimensional RTG enhances the performance of MODT(P), and the two are about the same for MORVS(P) when the preference is concatenated to the states. However, the multi-dimensional RTG reduces the standard error significantly in both MODT(P) and MORVS(P). In the naive models, where preferences are not concatenated to the states, using a multi-dimensional RTG helps achieve a much more competitive hypervolume. We thus believe the multi-dimensional RTG conveys important preference information when the model does not directly take the preference as an input. Results are based on an average of 3 seeds with the standard error given.
ICLR | Title
Scaling Pareto-Efficient Decision Making via Offline Multi-Objective RL
Abstract
The goal of multi-objective reinforcement learning (MORL) is to learn policies that simultaneously optimize multiple competing objectives. In practice, an agent’s preferences over the objectives may not be known apriori, and hence, we require policies that can generalize to arbitrary preferences at test time. In this work, we propose a new data-driven setup for offline MORL, where we wish to learn a preference-agnostic policy agent using only a finite dataset of offline demonstrations of other agents and their preferences. The key contributions of this work are two-fold. First, we introduce D4MORL, (D)datasets for MORL that are specifically designed for offline settings. It contains 1.8 million annotated demonstrations obtained by rolling out reference policies that optimize for randomly sampled preferences on 6 MuJoCo environments with 2-3 objectives each. Second, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that builds and extends return-conditioned offline methods including Decision Transformers (Chen et al., 2021) and RvS (Emmons et al., 2021) via a novel preference-and-return conditioned policy. Empirically, we show that PEDA closely approximates the behavioral policy on the D4MORL benchmark and provides an excellent approximation of the Pareto-front with appropriate conditioning, as measured by the hypervolume and sparsity metrics.
1 INTRODUCTION
We are interested in learning agents for multi-objective reinforcement learning (MORL) that optimize for one or more competing objectives. This setting is commonly observed in many real-world scenarios. For instance, an autonomous driving car might trade off high speed and energy savings depending on the user’s preferences. If the user has a relatively high preference for speed, the agent will move fast regardless of power usage; on the other hand, if the user tries to save energy, the agent will keep a more steady speed. One key challenge with MORL is that different users might have different preferences on the objectives and systematically exploring policies for each preference might be expensive, or even impossible. In the online setting, prior work considers several approximations based on scalarizing the vector-valued rewards of different objectives based on a single preference (Lin, 2005), learning an ensemble of policies based on enumerating preferences (Mossalam et al., 2016, Xu et al., 2020), or extensions of single-objective algorithms such as Q-learning to vectorized value functions (Yang et al., 2019).
We introduce the setting of offline multi-objective reinforcement learning for high-dimensional state and action spaces, where our goal is to train an MORL policy agent using an offline dataset of demonstrations from multiple agents with known preferences. Similar to the single-task setting, offline MORL can utilize auxiliary logged datasets to minimize interactions, thus improving data efficiency and minimizing interactions when deploying agents in high-risk settings. In addition to its practical utility, offline RL (Levine et al., 2020) has enjoyed major successes in the last few years (Kumar et al., 2020, Kostrikov et al., 2021, Chen et al., 2021) on challenging high-dimensional environments for continuous control and game-playing. Our contributions in this work are two-fold in introducing benchmarking datasets and a new family of MORL, as described below.
We introduce Datasets for Multi-Objective Reinforcement Learning (D4MORL), a collection of 1.8 million trajectories on 6 multi-objective MuJoCo environments (Xu et al., 2020). Here, 5 environ-
ments consist of 2 objectives and 1 environment consists of 3 objectives. For each environment in D4MORL, we collect demonstrations from 2 pretrained behavioral agents: expert and amateur, where the relative expertise is defined in terms of the Pareto-efficiency of the agents and measured empirically via their hypervolumes. Furthermore, we also include 3 kinds of preference distributions with varying entropies to expose additional data-centric aspects for downstream benchmarking. Lack of MORL datasets and large-scale benchmarking has been a major challenge for basic research (Hayes et al., 2022), and we hope that D4MORL can aid future research in the field.
Next, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that extends return-conditioned methods including Decision Transformer (DT) (Chen et al., 2021) and RvS (Emmons et al., 2021) to the multi-objective setting. These methods learn a returnconditioned policy via a supervised loss on the predicted actions. In recent work, these methods have successfully scaled to agents that demonstrate broad capabilities in multi-task settings (Lee et al., 2022 Reed et al., 2022). For MORL, we introduce a novel preference and return conditioned policy network and train it via a supervised learning loss. At test time, naively conditioning on the default preferences and maximum possible returns leads to out-of-distribution behavior for the model, as neither has it seen maximum returns for all objectives in the training data nor is it possible to simultaneously maximize all objectives under competition. We address this issue by learning to map preferences to appropriate returns and hence, enabling predictable generalization at test-time.
Empirically, we find PEDA performs exceedingly well on D4MORL and closely approximates the reference Pareto-frontier of the behavioral policy used for data generation. In the multi-objective HalfCheetah environment, compared with an average upper bound on the hypervolume of 5.79ˆ106 achieved by the behavioral policy, PEDA achieves an average hypervolume of 5.77 ˆ 106 on the Expert and 5.76 ˆ 106 on the Amateur datasets.
2 RELATED WORK
Multi-Objective Reinforcement Learning Predominant works in MORL focus on the online setting where the goal is to train agents that can generalize to arbitrary preferences. This can be achieved by training a single preference-conditioned policy (Yang et al., 2019; Parisi et al., 2016), or an ensemble of single-objective policies for a finite set of preferences (Mossalam et al., 2016; Xu et al., 2020; Zhang & Li, 2007). Many of these algorithms consider vectorized variants of standard algorithms such as Q-learning (Mossalam et al., 2016; Yang et al., 2019), often augmented with strategies to guide the policy ensemble towards the Pareto front using evolutionary or incrementally updated algorithms (Xu et al., 2020; Zhang & Li, 2007; Mossalam et al., 2016; Roijers et al., 2014; Huang et al., 2022). Other approaches have also been studied, such as framing MORL as a meta-learning problem (Chen et al., 2019), learning the action distribution for each objective (Abdolmaleki et al., 2020), and learning the relationship between objectives (Zhan & Cao, 2019) among others. In contrast to these online MORL works, our focus is on learning a single policy that works for all preferences using only offline datasets.
There are also a few works that study decision-making with multiple objectives in the offline setting and sidestep any interaction with the environments. Wu et al., 2021 propose a provably efficient offline MORL algorithm for tabular MDPs based on dual gradient ascent. Thomas et al., 2021 study learning of safe policies by extending the approach of Laroche et al., 2019 to the offline MORL setting. Their proposed algorithm assumes knowledge of the behavioral policy used to collect the offline data and is demonstrated primarily on tabular MDPs with finite state and action spaces. In contrast, we are interested in developing dataset benchmarks and algorithms for scalable offline policy optimization in high-dimensional MDPs with continuous states and actions.
Multi-Task Reinforcement Learning MORL is also closely related to multi-task reinforcement learning, where every task can be interpreted as a distinct objective. There is an extensive body of work in learning multi-task policies both in the online and offline setups (Wilson et al., 2007; Lazaric & Ghavamzadeh, 2010; Teh et al., 2017) inter alia. However, the key difference is that typical MTRL benchmarks and algorithms do not consider solving multiple tasks that involve inherent trade-offs. Consequently, there is no notion of Pareto efficiency and an agent can simultaneously excel in all the tasks without accounting for user preferences.
Reinforcement Learning Via Supervised Learning A body of recent works have formulated offline reinforcement learning as an autoregressive sequence modeling problem using Decision Transformers (DT) or Trajectory Transformers ( Chen et al., 2021, Janner et al., 2021) The key idea in DT is to learn a transformer-based policy that conditions on the past history and a dynamic estimate of the returns (a.k.a. returns-to-go). Follow-up works consider online learning (Zheng et al., 2022) as well as simpler variants that rely only on multi-layer perceptrons (Emmons et al., 2021). Such agents are generally more stable and robust to optimize due to the simplicity of loss function and easier to scale to more complex settings such as environments with high-dimensional actions or states, as shown in recent works in multi-task RL (Lee et al., 2022; Reed et al., 2022).
3 PRELIMINARIES
Setup and Notation. We operate in the general framework of a multi-objective Markov decision process (MOMDP) with linear preferences (Wakuta, 1995). An MOMDP is represented by the tuple xS,A,P,R,Ω, f, γy. At each timestep t, the agent with a current state st P S takes an action at P A to transition into a new state st`1 with probability Ppst`1|st,atq and observes a reward vector rt “ Rpst,atq P Rn. Here, n is the number of objectives. The vector-valued return R P Rn of an agent is given by the discounted sum of reward vectors over a time horizon, R “ ř
t γ trt. We
also assume that there exists a linear utility function f and a space of preferences Ω that can map the reward vector rt and a preference vector ω P Ω to a scalar reward rt, i.e., rt “ fprt,ωq “ ω⊺rt. The expected vector return of a policy π is given an Gπ “ rGπ1 , Gπ2 , . . . , Gπns⊺ where the expected return of the ith objective is given as Gπi “ Eat`1„πp¨|st,ωqr ř
t Rpst,atqis for some predefined time horizon and preference vector ω. The goal is to train a multi-objective policy πpa|s,ωq such that the expected scalarized return ω⊺ Gπ “ Erω⊺ ř
t Rpst,atqs is maximized.
Pareto Optimality. In MORL, one cannot optimize all objectives simultaneously, so policies are evaluated based on the Pareto set of their vector-valued expected returns. Consider a preference-conditioned policy πpa|s,ωq that is evaluated for m distinct preferences ω1, . . . ,ωm, and let the resulting policy set be represented as tπpup“1,...,m, where πp “ πpa|s,ω “ ωpq, and Gπp is the corresponding unweighted expected return. We say the solution Gπp is dominated by Gπq when there is no objective for which πq is worse than πp, i.e., G πp i ă G πq i for @i P r1, 2, . . . , ns. If a solution is not dominated, it is part of the Pareto set denoted as P . The curve traced by the solutions in a Pareto set is also known as the Pareto front. In MORL, our goal is to define a policy such that its empirical Pareto set is a good approximation of the true Pareto front. While we do not know the true Pareto front for many problems, we can define metrics for relative comparisons between different algorithms. Specifically, we evaluate a Pareto set P based on two metrics, hypervolume and sparsity that we describe next.
Definition 1 (Hypervolume). Hypervolume HpP q measures the space or volume enclosed by the solutions in the Pareto set P :
HpP q “ ż
Rm 1HpP qpzq dz,
where HpP q “ tz P Z|Di : 1 ď i ď |P |, r ĺ z ĺ P piqu. P piq is the ith solution in P , ĺ is the dominance relation operator, and 1HpP qpzq equals 1 if z P HpP q and 0 otherwise. Higher hypervolumes are better. Definition 2 (Sparsity). Sparsity SpP q measures the density of the Pareto front covered by a Pareto set P :
SpP q “ 1|P | ´ 1
n ÿ
i“1
|P |´1 ÿ
k“1 pP̃ipkq ´ P̃ipk ` 1qq2,
where P̃i represents a list sorted as per the values of the ith objective in P and P̃ipkq is the kth value in the sorted list. Lower sparsity is better.
See Figure 1 for an illustration and Appendix F for discussion on other possible metrics.
4 D4MORL: DATASETS FOR OFFLINE MULTI-OBJECTIVE REINFORCEMENT LEARNING
In offline RL, the goal of an RL agent is to learn the optimal policy using a fixed dataset without any interactions with the environment (Levine et al., 2020). This perspective brings RL closer to supervised learning, where the presence of large-scale datasets has been foundational for further progress in the field. Many such data benchmarks exist for offline RL as well; a notable one is the D4RL (Fu et al., 2020) benchmark for continuous control which has led to the development of several state-of-the-art offline RL algorithms (Kostrikov et al., 2021; Kumar et al., 2020; Chen et al., 2021) that can scale favorably even in high dimensions. To the best of our knowledge, there are no such existing benchmarks for offline MORL. Even for the online setting, most works in MORL conduct evaluations on toy MDPs (e.g., gridworld) with a few exceptions that include continuous control, e.g., Chen et al. (2019); Xu et al. (2020). This calls for a much-needed push towards more challenging benchmarks for reliable evaluation of MORL, especially in the offline setting.
We introduce Datasets for Multi-Objective Reinforcement Learning (D4MORL), a large-scale benchmark for offline MORL. Our benchmark consists of offline trajectories from 6 multiobjective MuJoCo environments including 5 environments with 2 objectives each (MO-Ant, MOHalfCheetah, MO-Hopper, MO-Swimmer, MO-Walker2d), and one environment with three objectives (MO-Hopper-3obj). The objectives are conflicting for each environment; for instance, the two objectives in MO-Hopper correspond to jumping and running; in MO-HalfCheetah, MO-Swimmer, and MO-Walker2d, they correspond to the speed and energy savings of the agent. See Appendix A for more details on the semantics of the target objectives for each environment. These environments were first introduced in Xu et al. (2020) for online MORL, and as such, we use their pretrained ensemble policies as building blocks for defining new behavioral policies for dataset collection, which we discuss next.
4.1 TRAJECTORY SAMPLING
The quality of the behavioral policy used for sampling trajectories in the offline dataset is a key factor for benchmarking downstream offline RL algorithms. In existing benchmarks for single-objective RL such as D4RL (Fu et al., 2020), the quality of a behavioral policy can be ascertained and varied based on its closeness to a single expert policy, as measured by its scalar-valued returns. For a MOMDP, we do not have the notion of a scalar return and hence, a reference expert policy (or set of policies) should reflect the optimal returns for all possible preferences in the preference space.
We use Prediction-Guided Multi-Objective Reinforcement Learning (PGMORL), a state-of-the-art MORL algorithm for defining reference expert policies. PGMORL (Xu et al., 2020) uses evolutionary algorithms to train an ensemble of policies to approximate the Pareto set. Each reference policy in the ensemble is associated with a unique preference; as for any new preference, it is mapped to the closest preference in the reference set. The number of policies in the ensemble can vary significantly; for instance, we have roughly 70 reference policies for MO-Antand 2445 policies for harder environments such as MO-Hopper-3obj. Given a desired preference, we define two sets of behavioral policies:
1. Expert Dataset: We find the best reference policy in the policy ensemble, and always follow the action taken by the selected reference policy.
2. Amateur Dataset: As before, we first find the best reference policy in the policy ensemble. With a fixed probability p, we randomly perturb the actions of the reference policies. Otherwise, with probability 1 ´ p, we take the same action as the reference policy. In D4MORL, we set p “ 0.65.
Further details are described in Appendix C. In Figure 2, we show the returns of the trajectories rolled out from the expert and amateur policies for the 2 objective environments evaluated for a uniform sampling of preferences. We can see that the expert trajectories typically dominate the amateur trajectories, as desired. For the amateur trajectories, we see more diversity in the empirical returns for both objectives under consideration. The return patterns for the amateur trajectories vary across different environments providing a diverse suite of datasets in our benchmark.
4.2 PREFERENCE SAMPLING
The coverage of any offline dataset is an important factor in dictating the performance of downstream offline RL algorithms (Levine et al., 2020). For MORL, the coverage depends on both the behavioral MORL policy as well as the distribution of preferences over which this policy is evaluated. We use the following protocols for sampling from the preference space Ω. First, we restrict our samples to lie within a physically plausible preference space Ω˚ Ď Ω covered by the behavioral policy πβ . For instance, MO-Hopper has two objectives: jumping and running. Since the agent can never gain running rewards without leaving the floor. Thus, the preference of 100% running and 0% jumping is not achievable and excluded from our preference sampling distribution.
Second, we are primarily interested in offline trajectories that emphasize competition between multiple objectives rather than focusing on a singular objective. To enforce this criterion, we define 3 sampling distributions concentrated around the centroid of the preference simplex. The largest spread distribution samples uniformly from Ω˚ and is denoted as High-Entropy (High-H). Next, we have a Medium-Entropy (Med-H) distribution specified via samples of Dirichlet distributions with large values of their concentration hyperparameters (aka α). Finally, we have a Low-Entropy (Low-H) distribution that is again specified via samples of Dirichlet distributions but with low values of their concentration hyperparameters. We illustrate the samples for each of the preference distributions along with their empirical entropies in Figure 3. Further details on the sampling distributions are deferred to Appendix B. By ensuring different levels of coverage, we can test the generalizability of an MORL policy to preferences unseen during training. In general, we expect Low-H to be the hardest of the three distributions due to its restricted coverage, followed by Med-H and High-H.
Overall Data Generation Pipeline. The pseudocode for generating the dataset is described in Algorithm 1. Given a preference distribution, we first sample a preference ω and query the closest behavioral policy in either the amateur/expert ensemble matching ω. We roll out this policy for T time steps (or until the end of an episode if sooner) and record the state, action, and reward information. Each trajectory in our dataset is represented as:
τ “ă ω, s1,a1, r1, . . . , sT ,aT , rT ą
Algorithm 1 Data Collection in D4MORL procedure COLLECT(prefDist, nTraj, env, pretrainedAgents, T)
agents = pretrainedAgents prefs = prefDist(nTraj) all trajs = [] for ω in prefs do
agent = closestAgent(agents, ω) s = env.reset() done = False τ = [ω] t = 0 while (NOT done) AND (t ă T) do
a = agent.get action(s) s1, done, r = env.step(a) append s, a, s1, r to τ s = s1 t = t + 1
append τ to all trajs return all trajs
For every environment in D4MORL, we collect 50K trajectories of length T “ 500 for both expert and amateur trajectory distributions under each of the 3 preference distributions. Overall, this results in a total of 1.8M trajectories over all 6 environments, which corresponds to roughly 867M time steps. We refer the reader to Table 5 in Appendix B for additional statistics on the dataset.
5 PARETO-EFFICIENT DECISION AGENTS (PEDA)
In this section, we propose Pareto-Efficient Decision Agents (PEDA), a new family of offline multi-objective RL agents. PEDA aims to achieve Pareto-efficiency by extending Decision Transformers (Chen et al., 2021) into multi-objective setting. We first introduce the architecture of Decision Transformers (DT) and its variant, Reinforcement Learning Via Supervised Learning (RvS), followed by our modifications extending them to the multi-objective setting.
DT casts offline RL as a conditional sequence modeling problem that predicts the next action by conditioning a transformer on past states, actions, and desired returns. The desired returns are defined as returns-to-go (RTG) gt “ řT t1“t rt1 , the future returns that this action is intended to achieve. Therefore, the trajectory is represented by τ “ă s1,a1, g1, . . . , sT ,aT , gT ą. In practice, we use a causally masked transformer architecture such as GPT (Radford et al., 2019) to process this sequence and predict the actions by observing the past K timesteps consisting of 3K tokens. DT and its variants have been shown to be more stable and robust to optimize due to the simplicity of loss function; easier to scale to more complex settings such as environments with high-dimensional actions or states, and agents with broad capabilities such as multitask settings (Lee et al., 2022). Hence, we adopt Decision Transformers (Chen et al., 2021) as the representative base algorithm on which we build our work.
In follow-up work, Emmons et al. (2021) extend DT and shows that even multi-layer perceptrons conditioned on the average returns-to-go can achieve similar performance without the use of transformers. They call their model Reinforcement Learning Via Supervised Learning (RvS). However, RvS is generally not very stable when conditioned on very large returns, unlike DT.
5.1 MULTI-OBJECTIVE REINFORCEMENT LEARNING VIA SUPERVISED LEARNING
In PEDA, our goal is to train a single preference-conditioned agent for offline MORL. By including preference conditioning, we enable the policy to be trained on arbitrary offline data, including trajectories collected from behavioral policies that are associated with alternate preferences. To parameterize our policy agents, we extend the DT and RvS architectures to include preference tokens and vector-valued returns. We refer to such preference-conditioned extensions of these architectures as MODT(P) and MORVS(P) respectively, which we describe next.
Preference Conditioning. Naively, we can incorporate the preference ω into DT by adding it as a token at each timestep and feeding it through a separate embedding layer. However, we find empirically that such a design tends to ignore ω, and the correlation between the preferences and predicted actions is weak. Therefore, in MODT(P) we propose to concatenate ω to the other tokens before any layers. Concretely, we define $s^* = s \oplus \omega$, $a^* = a \oplus \omega$, and $g^* = g \oplus \omega$, where $\oplus$ denotes the concatenation operator; triples of $s^*, a^*, g^*$ form the new trajectory. As for MORVS(P), we concatenate the preference with the states and the average RTGs by default, and the network interprets everything as a single input.
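A minimal PyTorch sketch of this concatenation is given below; the tensor shapes and function name are illustrative assumptions, not the exact implementation.

```python
import torch

def with_preference(states, actions, rtgs, omega):
    """Concatenate the preference to every token before any layer (MODT(P)).

    states: (B, T, s_dim), actions: (B, T, a_dim), rtgs: (B, T, n_obj),
    omega: (B, n_obj). Returns the s*, a*, g* tokens."""
    w = omega.unsqueeze(1).expand(-1, states.shape[1], -1)  # broadcast over time
    s_star = torch.cat([states, w], dim=-1)
    a_star = torch.cat([actions, w], dim=-1)
    g_star = torch.cat([rtgs, w], dim=-1)
    return s_star, a_star, g_star
```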
Multi-Objective Returns-to-Go. Similar to the RTG for the single-objective case, we can define a vector-valued RTG as $g_t = \sum_{t'=t}^{T} r_{t'}$. Given a preference vector ω, we can scalarize the total returns-to-go as $\hat{g}_t = \omega^\top g_t$. In principle, the scalarized RTG $\hat{g}_t$ can be recovered given the preference vector ω and the vector-valued RTG $g_t$. However, empirically we find that directly feeding MODT/MORVS with the preference-weighted RTG vector $g_t \odot \omega$ is slightly preferable for stable training, where $\odot$ denotes the elementwise product operator. Another unique challenge in the MORL setting concerns the scale of different objectives. Since different objectives can signify different physical quantities (e.g., energy and speed), the choice of scaling can influence policy optimization. We adopt a simple normalization scheme, where the returns for each objective are normalized by subtracting the minimum observed value for that objective and dividing by the range of values (max-min). Note that the maximum and minimum are computed based on the offline dataset and hence are not necessarily the true min/max objective values. After this normalization, the values for every objective in the trajectory are on the same scale, between 0 and 1. For evaluating the hypervolume and sparsity, we use the unnormalized values so that we can make comparisons across different datasets that may have different min/max boundaries.
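The following numpy sketch illustrates the normalization and the preference-weighted RTG input, assuming the per-objective min/max statistics have been precomputed from the offline dataset.

```python
import numpy as np

def normalize_returns(rtg, r_min, r_max):
    # Min-max normalize each objective using dataset-level statistics,
    # which are not necessarily the true min/max objective values.
    return (rtg - r_min) / (r_max - r_min)

def weighted_rtg(rtg, omega, r_min, r_max):
    # Elementwise product g_t ⊙ ω on the normalized multi-objective RTG,
    # which we found slightly preferable to the scalar ω^T g_t.
    return normalize_returns(rtg, r_min, r_max) * omega
```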
Training. We follow a simple supervised training procedure where we train the policies on randomly sampled mini-batches with an MSE loss (for continuous actions). In MODT and MODT(P), the input states, actions, and returns-to-go (with concatenated preferences) are treated as tokens and embedded through one layer of MLP. We apply a layer of MLP and Tanh on the last hidden state of the GPT-2 transformer to predict the next action. In MORVS and MORVS(P), we use only information from the current timestep and MLP layers to predict the next action.
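A minimal sketch of one such training step is shown below; `model` stands in for either the MODT(P) transformer or the MORVS(P) MLP, and its call signature is an illustrative assumption.

```python
import torch.nn.functional as F

def train_step(model, optimizer, batch):
    # `batch` holds preference-augmented tokens and ground-truth actions.
    s_star, a_star, g_star, target_actions = batch
    pred_actions = model(s_star, a_star, g_star)   # (B, T, a_dim)
    loss = F.mse_loss(pred_actions, target_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```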
6 EXPERIMENTS
In this section, we evaluate the performance of PEDA on the D4MORL benchmark. First, we investigate the benefits of preference conditioning by evaluating decision transformer (MODT) and RvS (MORVS) variants where no preference information is available and the multi-objective vector returns are scalarized into weighted sums. We denote our methods with preference conditioning as MODT(P) and MORVS(P). Second, we compare our methods against classic imitation learning and temporal difference learning algorithms with preference conditioning.
Imitation learning. Imitation learning simply uses a supervised loss to train a mapping from states (with or without concatenated preferences) to actions. We use behavioral cloning (BC) and train multi-layer MLPs; the models are named BC (without preference) and BC(P) (with preference).
Temporal difference learning. Conservative Q-Learning (CQL) (Kumar et al., 2020) is the state-of-the-art standard offline RL method, which learns a conservative Q-function $f: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ through neural networks. We modify the network architecture such that it also takes preference vectors as inputs to learn a preference-conditioned Q-function $f^*: \mathcal{S} \times \mathcal{A} \times \Omega \rightarrow \mathbb{R}$. We denote this method as CQL(P).
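The modification amounts to widening the Q-network input; a minimal PyTorch sketch of such a preference-conditioned Q-function is below (layer sizes are illustrative).

```python
import torch
import torch.nn as nn

class PreferenceQNetwork(nn.Module):
    """Q(s, a, ω): an MLP over the concatenated state, action, and preference."""

    def __init__(self, state_dim, act_dim, n_obj, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + act_dim + n_obj, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a, omega):
        return self.net(torch.cat([s, a, omega], dim=-1))
```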
6.1 MULTI-OBJECTIVE OFFLINE BENCHMARK
Hypervolume. We compare the hypervolume of our methods against all baselines on the expert datasets in Table 1 and on the amateur datasets in Table 2. For the two-objective environments, we evaluate the models on 501 equally spaced preference points in the range [0, 1]; on the three-objective environment MO-Hopper-3obj, models are evaluated on 325 equally spaced points. Each point is evaluated 5 times with random environment re-initialization, and the median value is recorded. Finally, all results are based on 3 random seeds, and we report the mean performance along with the standard error. In Table 1 and Table 2, we see that MODT(P) and MORVS(P) outperform the other baselines and have relatively low standard errors. Moreover, the PEDA variants MODT(P) and MORVS(P) approach the behavioral policy upper bound.
Sparsity. We also evaluate sparsity. Since sparsity comparisons are only meaningful between models that are sensitive to the preference and achieve relatively similar hypervolume, we only report results for models that concatenate the preference. Overall, MORVS(P) has the lowest sparsity in most environments while also achieving an outstanding hypervolume.
6.2 ABLATION STUDY
Pareto front approximation. We ablate how well MODT(P) and MORVS(P) can approximate the Pareto front by conditioning on different preference points. We show the results in Figure 4: the models approximate the Pareto front well, with some dominated points (colored in pink) mostly in the MO-Hopper and MO-Walker2d environments. The results are based on the average of 3 seeds, and the full plot can be found in Appendix G.
Return distribution. We ablate how well MODT(P) and MORVS(P) follow a given target return, measured on normalized and preference-weighted values. We present the results in Figure 5 for MORVS(P) under the High-H-Expert datasets and refer to Appendix H for the full settings. The models follow the oracle line closely when conditioned on targets within the dataset distribution, and also generalize to targets outside of it.
7 CONCLUSION
We proposed a new problem setup for offline multi-objective reinforcement learning to scale Pareto-efficient decision-making using offline datasets. To characterize progress, we introduced D4MORL, a benchmark consisting of offline datasets generated from behavioral policies of different fidelities (expert/amateur) and rolled out under preference distributions of varying entropy (high/medium/low). We then proposed PEDA, a family of offline MORL policy optimization algorithms based on decision transformers. To our knowledge, the PEDA variants are the first offline MORL policies that support continuous action and preference spaces. We showed that by concatenating and embedding preferences together with the other inputs, our policies can effectively approximate the Pareto front of the underlying behavioral policy, as measured by the hypervolume and sparsity metrics. Our proposed family includes MLP- and transformer-based variants, viz. MORVS(P) and MODT(P), with MORVS(P) performing the best overall. In some scenarios, the learned policies can also generalize to higher target rewards that exceed the data distribution.
REPRODUCIBILITY STATEMENT
Our code is available at: https://github.com/baitingzbt/PEDA.
ACKNOWLEDGEMENTS
AG’s research is supported by a Meta Research Award and a Cisco grant.
A ENVIRONMENT DESCRIPTION
All environments are the same as in Xu et al. (2020), except that when resetting the environment, each parameter is uniformly sampled from $[x - 10^{-3}, x + 10^{-3}]$, with x being the default value. The exception is that we always reset the height to 1.25 for MO-Hopper and MO-Hopper-3obj, since this parameter directly relates to the reward function. All environments have a maximum episode length of 500 steps per trajectory, but the agent may also die before reaching the maximum length.
A.1 MO-ANT
The two objectives in MO-Ant are the distances achieved along the x and y axes respectively, denoted as $r = [r_t^{v_x}, r_t^{v_y}]^\top$. Suppose the position of the agent is $(x_t, y_t)$ at time t and it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, a fixed $dt = 0.05$, and an action cost $r_a = \frac{1}{2}\sum_k a_k^2$. The rewards are calculated as:

$$r_t^{v_x} = (x_t - x_{t-1}) / dt + r_s - r_a, \qquad r_t^{v_y} = (y_t - y_{t-1}) / dt + r_s - r_a \tag{1}$$
A.2 MO-HALFCHEETAH
The two objectives in MO-HalfCheetah are running speed and energy saving, denoted as $r = [r_t^{v}, r_t^{e}]^\top$. Suppose the position of the agent is $(x_t, y_t)$ at time t and it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, a fixed $dt = 0.05$, and an action cost $r_a = \sum_k a_k^2$. The rewards are calculated as:

$$r_t^{v} = \min\{4.0, (x_t - x_{t-1}) / dt\} + r_s, \qquad r_t^{e} = 4.0 - r_a + r_s \tag{2}$$
A.3 MO-HOPPER
The two objectives in MO-Hopper are running and jumping, denoted as $r = [r^{r}, r^{j}]^\top$. Suppose the position of the agent is $(x_t, h_t)$ at time t and it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, a fixed initial height $h_{\text{init}} = 1.25$, a fixed $dt = 0.01$, and an action cost $r_a = 2 \times 10^{-4} \sum_k a_k^2$. The rewards are calculated as:

$$r_t^{r} = 1.5 \times (x_t - x_{t-1}) / dt + r_s - r_a, \qquad r_t^{j} = 12 \times (h_t - h_{\text{init}}) / dt + r_s - r_a \tag{3}$$
A.4 MO-HOPPER-3OBJ
The physical dynamics are the same in MO-Hopper and MO-Hopper-3obj, but this environment has 3 objectives: running, jumping, and energy saving. The rewards are denoted as $r = [r^{r}, r^{j}, r^{e}]^\top$. Suppose the position of the agent is $(x_t, h_t)$ at time t and it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, a fixed initial height $h_{\text{init}} = 1.25$, a fixed $dt = 0.01$, and an action cost $r_a = \sum_k a_k^2$. The rewards are calculated as:

$$r_t^{r} = 1.5 \times (x_t - x_{t-1}) / dt + r_s, \quad r_t^{j} = 12 \times (h_t - h_{\text{init}}) / dt + r_s, \quad r_t^{e} = 4.0 - r_a + r_s \tag{4}$$
A.5 MO-SWIMMER
The two objectives in MO-Swimmer are speed and energy saving, denoted as $r = [r^{v}, r^{e}]^\top$. Suppose the position of the agent is $(x_t, y_t)$ at time t and it takes the action $a_t$. The agent has a fixed $dt = 0.05$ and an action cost $r_a = \sum_k a_k^2$. The rewards are calculated as:

$$r_t^{v} = (x_t - x_{t-1}) / dt, \qquad r_t^{e} = 0.3 - 0.15 \times r_a \tag{5}$$
A.6 MO-WALKER2D
The objectives in MO-Walker2d are speed and energy saving, denoted as $r = [r^{v}, r^{e}]^\top$. Suppose the position of the agent is $(x_t, y_t)$ at time t and it takes the action $a_t$. The agent has a fixed survival reward $r_s = 1.0$, a fixed $dt = 0.008$, and an action cost $r_a = \sum_k a_k^2$. The rewards are calculated as:

$$r_t^{v} = (x_t - x_{t-1}) / dt + r_s, \qquad r_t^{e} = 4.0 - r_a + r_s \tag{6}$$
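As a concrete example of these reward definitions, the following sketch computes the MO-Walker2d vector reward from consecutive x-positions and the action, using the constants stated above.

```python
import numpy as np

def walker2d_rewards(x_t, x_prev, action, dt=0.008, r_s=1.0):
    """Two-objective reward [speed, energy] for MO-Walker2d (Eq. 6)."""
    r_a = np.sum(np.square(action))        # action cost
    r_v = (x_t - x_prev) / dt + r_s        # speed objective
    r_e = 4.0 - r_a + r_s                  # energy-saving objective
    return np.array([r_v, r_e])
```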
B DATASET DETAILS
To uniformly sample the High-H data from the entire preference space, we can equivalently sample from an n-dimensional simplex, where n is the number of objectives. The resulting sampling scheme is:

$$\omega_{\text{high}} = z / \|z\|_1, \quad z_i \overset{\text{iid}}{\sim} \text{Exp}(\lambda = 1) \tag{9}$$

Normalizing the exponential samples by their 1-norm ensures the entries of each preference vector add up to 1. When $\Omega^* \neq \Omega$, we perform rejection sampling to restrict the range.
To sample the Med-H and Low-H data, we first sample α from a non-negative uniform distribution and then sample the corresponding Dirichlet preference. We sample a fresh α for every draw so that the mode of the Dirichlet changes, which allows more variation:

$$\omega_{\text{med}} \sim \text{Dirichlet}(\alpha), \ \alpha \sim \text{Unif}(0, 10^6); \qquad \omega_{\text{low}} \sim \text{Dirichlet}(\alpha), \ \alpha \sim \text{Unif}(\tfrac{1}{3} \times 10^6, \tfrac{2}{3} \times 10^6) \tag{10}$$
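A numpy sketch of the three samplers is given below. We read Eq. (10) as drawing a fresh concentration vector α per preference so that the Dirichlet mode moves around; whether α is shared across coordinates is our assumption, and the feasibility predicate for rejection sampling is left to the caller.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_high(n_obj):
    # Normalized exponentials are uniform on the simplex (Eq. 9).
    z = rng.exponential(scale=1.0, size=n_obj)
    return z / z.sum()

def sample_dirichlet(lo, hi, n_obj):
    # A fresh concentration vector per draw moves the Dirichlet mode around
    # (our reading of Eq. 10; a shared scalar alpha is also plausible).
    alpha = rng.uniform(lo, hi, size=n_obj)
    return rng.dirichlet(alpha)

def sample_pref(dist, n_obj, feasible=lambda w: True):
    # Rejection-sample until the preference lies in the achievable space.
    while True:
        if dist == "high":
            w = sample_high(n_obj)
        elif dist == "med":
            w = sample_dirichlet(0.0, 1e6, n_obj)
        else:  # "low"
            w = sample_dirichlet(1e6 / 3, 2e6 / 3, n_obj)
        if feasible(w):
            return w
```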
Since our behavioral policy consists of a group of single-objective policies $\pi_\beta = \{\pi_1, \ldots, \pi_B\}$, with B the total number of candidate policies, we first find the expected unweighted raw returns $G^{\pi_1}, \ldots, G^{\pi_B}$. Then, we compute the estimated preferences $\hat{\omega}^{\pi_1}, \ldots, \hat{\omega}^{\pi_B}$ via $\hat{\omega}_i^{\pi_b} = G_i^{\pi_b} / \sum_{j=1}^{n} G_j^{\pi_b}$, the estimated preference on the ith objective of the bth candidate policy. For each sampled preference $\omega \sim \Omega^*$ following (9) or (10), we sample a complete trajectory using the single-objective behavioral policy with the smallest Euclidean distance $d(\omega, \hat{\omega}^{\pi_b})$. Empirically, this means picking the candidate policy whose expected reward ratio is closest to ω.
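A minimal sketch of this matching step:

```python
import numpy as np

def estimated_preferences(returns):
    # returns: (B, n) expected unweighted returns G^{pi_b};
    # row-normalizing gives each policy's estimated preference.
    G = np.asarray(returns, dtype=float)
    return G / G.sum(axis=1, keepdims=True)

def pick_policy(omega, returns):
    # Index of the candidate whose return ratio is closest to omega.
    w_hat = estimated_preferences(returns)
    return int(np.argmin(np.linalg.norm(w_hat - omega, axis=1)))
```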
C EXPERT & AMATEUR DATASETS
In the Expert collection, we sample trajectories using the fully trained behavioral policy $\pi_\beta$. In this paper, we use PGMORL (Xu et al., 2020) as our behavioral policy $\pi_\beta$:

$$a_{t+1}^{\text{expert}} = \pi_\beta(a \mid s = s_t, \omega = \omega_t) \tag{11}$$
In the Amateur collection, we add stochasticity on top of the expert collection: with probability 65%, the expert action is rescaled by a random factor, and otherwise it is kept unchanged:

$$a_{t+1}^{\text{amateur}} = \begin{cases} a_{t+1}^{\text{expert}} & \text{w.p. } 35\% \\ a_{t+1}^{\text{expert}} \times \text{Unif}(0.35, 1.65) & \text{w.p. } 65\% \end{cases} \tag{12}$$
In the MO-Swimmer environment only, we instead let the action be a uniform random sample from the entire action space with probability 35% (rather than keeping the expert action), which increases variance and yields a performance level similar to the other amateur datasets. The resulting strategy for MO-Swimmer is:

$$a_{t+1}^{\text{amateur}} = \begin{cases} \text{Unif}(\mathcal{A}) & \text{w.p. } 35\% \\ a_{t+1}^{\text{expert}} \times \text{Unif}(0.35, 1.65) & \text{w.p. } 65\% \end{cases} \tag{13}$$
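A minimal sketch of the amateur action rule, with the MO-Swimmer variant toggled by a flag:

```python
import numpy as np

rng = np.random.default_rng(0)

def amateur_action(expert_action, act_low, act_high, swimmer=False):
    if rng.random() < 0.35:
        if swimmer:
            # MO-Swimmer only: a uniform sample from the action space (Eq. 13).
            return rng.uniform(act_low, act_high)
        return expert_action                    # keep the expert action (Eq. 12)
    # Otherwise, rescale the expert action by a random factor.
    return expert_action * rng.uniform(0.35, 1.65)
```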
D FINDING APPROPRIATE MULTI-OBJECTIVE RTG
In Decision Transformers (Chen et al., 2021) and RvS (Emmons et al., 2021), the RTG denotes the desired future reward. In MORL, however, we must design an appropriate multi-objective RTG. Beyond treating each objective's desired reward separately, we find empirically that because some objectives are inherently conflicting, setting the RTG high for one objective means we should accordingly lower the RTG for the others (i.e., we should not use the maximum RTG for all objectives simultaneously). This way, the test-time RTG stays closer to the training distribution.
In this paper, we use a linear regression $G = f(\omega)$ to find the corresponding RTG conditioned on a given preference. Figure 6 shows the weighted RTG of the "running" objective as a function of its preference in MO-Hopper, where the conflicting objectives are "running" and "jumping". The RTG correlates closely with the conditioned preference for running, and we adjust the initial RTG at test time accordingly.
Finally, we only use the linear regression model learned from the Expert dataset, because regression models fitted on sub-optimal data can easily produce an RTG lower than optimal. In practice, a similar result can be achieved by training the regression model only on the best-performing trajectories for the respective preferences. Other regression or clustering methods for finding an appropriate RTG could also work; we leave this to future work, especially for settings that do not assume linearly weighted objectives.
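A minimal sketch of fitting this preference-to-RTG map with ordinary least squares on expert data (the bias column is an illustrative choice):

```python
import numpy as np

def fit_rtg_regression(prefs, returns):
    # prefs: (N, n) preferences; returns: (N, n) achieved vector returns.
    # Fits G = W^T [omega; 1] per objective via ordinary least squares.
    X = np.hstack([prefs, np.ones((len(prefs), 1))])   # add a bias column
    W, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return W                                           # shape (n + 1, n)

def target_rtg(W, omega):
    # Predicted initial RTG for each objective, given a test-time preference.
    return np.append(omega, 1.0) @ W
```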
E TRAINING DETAILS
In this section, we list our hyperparameters and model details. Specifically, we use the same hyperparameters for all algorithms, except for the learning rate scheduler and warm-up steps. In the MODT family, inputs are first embedded by a 1-layer fully connected network, and n_layer denotes the number of transformer blocks; in the BC family, n_layer denotes the number of MLP layers used to embed each input; in MORVS and MORVS(P), we use the same embedding strategy as Emmons et al. (2021). Additionally, MORVS and MORVS(P) both have a context length of 1 because they only use the current state to predict the next action, whereas MODT and BC use the past 20 timesteps.
E.1 PARAMETERS
| Hyperparameter | MODT | MORvS | BC |
|---|---|---|---|
| Context Length (K) | 20 | 1 | 20 |
| Batch Size | 64 | 64 | 64 |
| Hidden Size | 512 | 512 | 512 |
| Learning Rate | 1e-4 | 1e-4 | 1e-4 |
| Weight Decay | 1e-3 | 1e-3 | 1e-3 |
| Dropout | 0.1 | 0.1 | 0.1 |
| n_layer | 3 | 3 | 3 |
| Optimizer | AdamW | AdamW | AdamW |
| Loss Function | MSE | MSE | MSE |
| LR Scheduler | lambda | None | lambda |
| Warm-up Steps | 10000 | N/A | 4000 |
| Activation | ReLU | ReLU | ReLU |
E.2 TRAINING STEPS
| Dataset Name | MODT Steps | RvS/BC Steps |
|---|---|---|
| MO-Ant | 20K | 200K |
| MO-HalfCheetah | 80K | 200K |
| MO-Hopper | 400K | 200K |
| MO-Hopper-3obj | 400K | 200K |
| MO-Swimmer | 260K | 200K |
| MO-Walker2d | 360K | 200K |
E.3 OTHER ATTEMPTED ARCHITECTURES FOR MODT AND MODT(P)
We tried the following MODT architectures in our preliminary experiments. We picked Case 4 eventually as it gave the best performance in our experiments.
1. Consider ω as an independent token of the transformer.

2. Train a separate embedding for ω and concatenate the embeddings to get $f_{\phi_s}(s) \oplus f_{\phi_\omega}(\omega)$, $f_{\phi_a}(a) \oplus f_{\phi_\omega}(\omega)$, and $f_{\phi_g}(g) \oplus f_{\phi_\omega}(\omega)$, then pass these into the transformer.

3. Add another MLP layer on top of Case 2 after concatenation, then pass the output into the transformer.

4. Concatenate ω to the other tokens before any layers, i.e., $s^* = s \oplus \omega$, $a^* = a \oplus \omega$, and $g^* = g \oplus \omega$.
F OTHER EVALUATION METRICS
Among the variety of metrics for MORL, we use Hypervolume (HV) and Sparsity (SP) to benchmark models for several reasons. First, many metrics such as the ϵ-metric require prior knowledge of the true Pareto front, which is not available for our MuJoCo environments. Second, we only assume a linear reward function and cannot collect real-time user feedback, so utility-based metrics such as the expected utility metric (EUM) are not applicable. Finally, using the same metrics as the original behavioral policy paper facilitates algorithm comparisons.
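For two objectives, both metrics are straightforward to compute from a Pareto set; a minimal sketch with the reference point at the origin (assuming the points are mutually non-dominated):

```python
import numpy as np

def hypervolume_2d(points, ref=(0.0, 0.0)):
    """Area dominated by a 2-objective (maximization) Pareto set w.r.t. `ref`.
    Assumes the points are mutually non-dominated and dominate `ref`."""
    pts = sorted((x - ref[0], y - ref[1]) for x, y in points)  # ascending x
    hv, prev_x = 0.0, 0.0
    for x, y in pts:          # on a Pareto front, y is the envelope height here
        hv += (x - prev_x) * y
        prev_x = x
    return hv

def sparsity(points):
    """Average squared gap between neighboring solutions (Definition 2)."""
    P = np.asarray(points, dtype=float)
    if len(P) < 2:
        return 0.0
    total = sum(np.sum(np.diff(np.sort(P[:, i])) ** 2) for i in range(P.shape[1]))
    return total / (len(P) - 1)
```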
G PARETO SET VISUALIZATIONS
We present the Pareto set visualizations for all of our models trained on each High-H dataset in Figure 7. Each point in each subplot is based on the average result of 3 seeds. In the 2-objective environments, we evaluate the model using 501 equally spaced preference points; in the 3-objective environment, we use 351 equally spaced preference points instead. Since the environments are stochastically initialized, we evaluate 5 times at each preference point and take the mean value, making each point the average of 15 runs. We allow a small tolerance when coloring the dominated points.

If a preference point is within the achievable preference space Ω˚ but the solution is dominated, we color it red. Since our models are conditioned on continuous preference points and the environments are initialized stochastically, we allow a small tolerance (3%-8%) for points to be colored blue. The hypervolume and sparsity metrics, on the other hand, are based on strictly non-dominated solutions without tolerance.
H MEDIUM & LOW ENTROPY DATASET TRAINING
We train on the Medium-Entropy and Low-Entropy datasets for the MO-HalfCheetah environment. Overall, models perform similarly under the Med-H and High-H datasets but suffer when trained only on Low-H. We present the results in Table 6, which shows that the Low-H dataset has worse expert and amateur performance due to the reduced variability in preferences. Nevertheless, MODT(P) and MORVS(P) still get close to, or exceed, the behavioral policy's hypervolume on all datasets, which showcases the effectiveness of PEDA as an efficient MORL policy. Results are based on an average of 3 seeds with the standard error given.
I TRAINING WITH 1-DIM RTG
We attempted to train MODT and RvS with a 1-dimensional return-to-go rather than a separate RTG for each objective. According to the results on MO-HalfCheetah with the High-H datasets in Table 7, using a multi-dimensional RTG enhances the performance of MODT(P) and is about the same for MORVS(P) when the preference is concatenated to the states. However, it reduces the standard error significantly for both MODT(P) and MORVS(P). In the naive models where preferences are not concatenated to the states, using a multi-dimensional RTG helps achieve a much more competitive hypervolume. We thus believe the multi-dimensional RTG conveys important preference information when the model does not directly take the preference as an input. Results are based on an average of 3 seeds with the standard error given.
ICLR | Title
Scaling Pareto-Efficient Decision Making via Offline Multi-Objective RL
Abstract
The goal of multi-objective reinforcement learning (MORL) is to learn policies that simultaneously optimize multiple competing objectives. In practice, an agent’s preferences over the objectives may not be known apriori, and hence, we require policies that can generalize to arbitrary preferences at test time. In this work, we propose a new data-driven setup for offline MORL, where we wish to learn a preference-agnostic policy agent using only a finite dataset of offline demonstrations of other agents and their preferences. The key contributions of this work are two-fold. First, we introduce D4MORL, (D)datasets for MORL that are specifically designed for offline settings. It contains 1.8 million annotated demonstrations obtained by rolling out reference policies that optimize for randomly sampled preferences on 6 MuJoCo environments with 2-3 objectives each. Second, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that builds and extends return-conditioned offline methods including Decision Transformers (Chen et al., 2021) and RvS (Emmons et al., 2021) via a novel preference-and-return conditioned policy. Empirically, we show that PEDA closely approximates the behavioral policy on the D4MORL benchmark and provides an excellent approximation of the Pareto-front with appropriate conditioning, as measured by the hypervolume and sparsity metrics.
1 INTRODUCTION
We are interested in learning agents for multi-objective reinforcement learning (MORL) that optimize for one or more competing objectives. This setting is commonly observed in many real-world scenarios. For instance, an autonomous driving car might trade off high speed and energy savings depending on the user’s preferences. If the user has a relatively high preference for speed, the agent will move fast regardless of power usage; on the other hand, if the user tries to save energy, the agent will keep a more steady speed. One key challenge with MORL is that different users might have different preferences on the objectives and systematically exploring policies for each preference might be expensive, or even impossible. In the online setting, prior work considers several approximations based on scalarizing the vector-valued rewards of different objectives based on a single preference (Lin, 2005), learning an ensemble of policies based on enumerating preferences (Mossalam et al., 2016, Xu et al., 2020), or extensions of single-objective algorithms such as Q-learning to vectorized value functions (Yang et al., 2019).
We introduce the setting of offline multi-objective reinforcement learning for high-dimensional state and action spaces, where our goal is to train an MORL policy agent using an offline dataset of demonstrations from multiple agents with known preferences. Similar to the single-task setting, offline MORL can utilize auxiliary logged datasets to minimize interactions, thus improving data efficiency and minimizing interactions when deploying agents in high-risk settings. In addition to its practical utility, offline RL (Levine et al., 2020) has enjoyed major successes in the last few years (Kumar et al., 2020, Kostrikov et al., 2021, Chen et al., 2021) on challenging high-dimensional environments for continuous control and game-playing. Our contributions in this work are two-fold in introducing benchmarking datasets and a new family of MORL, as described below.
We introduce Datasets for Multi-Objective Reinforcement Learning (D4MORL), a collection of 1.8 million trajectories on 6 multi-objective MuJoCo environments (Xu et al., 2020). Here, 5 environ-
ments consist of 2 objectives and 1 environment consists of 3 objectives. For each environment in D4MORL, we collect demonstrations from 2 pretrained behavioral agents: expert and amateur, where the relative expertise is defined in terms of the Pareto-efficiency of the agents and measured empirically via their hypervolumes. Furthermore, we also include 3 kinds of preference distributions with varying entropies to expose additional data-centric aspects for downstream benchmarking. Lack of MORL datasets and large-scale benchmarking has been a major challenge for basic research (Hayes et al., 2022), and we hope that D4MORL can aid future research in the field.
Next, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that extends return-conditioned methods including Decision Transformer (DT) (Chen et al., 2021) and RvS (Emmons et al., 2021) to the multi-objective setting. These methods learn a returnconditioned policy via a supervised loss on the predicted actions. In recent work, these methods have successfully scaled to agents that demonstrate broad capabilities in multi-task settings (Lee et al., 2022 Reed et al., 2022). For MORL, we introduce a novel preference and return conditioned policy network and train it via a supervised learning loss. At test time, naively conditioning on the default preferences and maximum possible returns leads to out-of-distribution behavior for the model, as neither has it seen maximum returns for all objectives in the training data nor is it possible to simultaneously maximize all objectives under competition. We address this issue by learning to map preferences to appropriate returns and hence, enabling predictable generalization at test-time.
Empirically, we find PEDA performs exceedingly well on D4MORL and closely approximates the reference Pareto-frontier of the behavioral policy used for data generation. In the multi-objective HalfCheetah environment, compared with an average upper bound on the hypervolume of 5.79ˆ106 achieved by the behavioral policy, PEDA achieves an average hypervolume of 5.77 ˆ 106 on the Expert and 5.76 ˆ 106 on the Amateur datasets.
2 RELATED WORK
Multi-Objective Reinforcement Learning Predominant works in MORL focus on the online setting where the goal is to train agents that can generalize to arbitrary preferences. This can be achieved by training a single preference-conditioned policy (Yang et al., 2019; Parisi et al., 2016), or an ensemble of single-objective policies for a finite set of preferences (Mossalam et al., 2016; Xu et al., 2020; Zhang & Li, 2007). Many of these algorithms consider vectorized variants of standard algorithms such as Q-learning (Mossalam et al., 2016; Yang et al., 2019), often augmented with strategies to guide the policy ensemble towards the Pareto front using evolutionary or incrementally updated algorithms (Xu et al., 2020; Zhang & Li, 2007; Mossalam et al., 2016; Roijers et al., 2014; Huang et al., 2022). Other approaches have also been studied, such as framing MORL as a meta-learning problem (Chen et al., 2019), learning the action distribution for each objective (Abdolmaleki et al., 2020), and learning the relationship between objectives (Zhan & Cao, 2019) among others. In contrast to these online MORL works, our focus is on learning a single policy that works for all preferences using only offline datasets.
There are also a few works that study decision-making with multiple objectives in the offline setting and sidestep any interaction with the environments. Wu et al., 2021 propose a provably efficient offline MORL algorithm for tabular MDPs based on dual gradient ascent. Thomas et al., 2021 study learning of safe policies by extending the approach of Laroche et al., 2019 to the offline MORL setting. Their proposed algorithm assumes knowledge of the behavioral policy used to collect the offline data and is demonstrated primarily on tabular MDPs with finite state and action spaces. In contrast, we are interested in developing dataset benchmarks and algorithms for scalable offline policy optimization in high-dimensional MDPs with continuous states and actions.
Multi-Task Reinforcement Learning MORL is also closely related to multi-task reinforcement learning, where every task can be interpreted as a distinct objective. There is an extensive body of work in learning multi-task policies both in the online and offline setups (Wilson et al., 2007; Lazaric & Ghavamzadeh, 2010; Teh et al., 2017) inter alia. However, the key difference is that typical MTRL benchmarks and algorithms do not consider solving multiple tasks that involve inherent trade-offs. Consequently, there is no notion of Pareto efficiency and an agent can simultaneously excel in all the tasks without accounting for user preferences.
Reinforcement Learning Via Supervised Learning A body of recent works have formulated offline reinforcement learning as an autoregressive sequence modeling problem using Decision Transformers (DT) or Trajectory Transformers ( Chen et al., 2021, Janner et al., 2021) The key idea in DT is to learn a transformer-based policy that conditions on the past history and a dynamic estimate of the returns (a.k.a. returns-to-go). Follow-up works consider online learning (Zheng et al., 2022) as well as simpler variants that rely only on multi-layer perceptrons (Emmons et al., 2021). Such agents are generally more stable and robust to optimize due to the simplicity of loss function and easier to scale to more complex settings such as environments with high-dimensional actions or states, as shown in recent works in multi-task RL (Lee et al., 2022; Reed et al., 2022).
3 PRELIMINARIES
Setup and Notation. We operate in the general framework of a multi-objective Markov decision process (MOMDP) with linear preferences (Wakuta, 1995). An MOMDP is represented by the tuple xS,A,P,R,Ω, f, γy. At each timestep t, the agent with a current state st P S takes an action at P A to transition into a new state st`1 with probability Ppst`1|st,atq and observes a reward vector rt “ Rpst,atq P Rn. Here, n is the number of objectives. The vector-valued return R P Rn of an agent is given by the discounted sum of reward vectors over a time horizon, R “ ř
t γ trt. We
also assume that there exists a linear utility function f and a space of preferences Ω that can map the reward vector rt and a preference vector ω P Ω to a scalar reward rt, i.e., rt “ fprt,ωq “ ω⊺rt. The expected vector return of a policy π is given an Gπ “ rGπ1 , Gπ2 , . . . , Gπns⊺ where the expected return of the ith objective is given as Gπi “ Eat`1„πp¨|st,ωqr ř
t Rpst,atqis for some predefined time horizon and preference vector ω. The goal is to train a multi-objective policy πpa|s,ωq such that the expected scalarized return ω⊺ Gπ “ Erω⊺ ř
t Rpst,atqs is maximized.
Pareto Optimality. In MORL, one cannot optimize all objectives simultaneously, so policies are evaluated based on the Pareto set of their vector-valued expected returns. Consider a preference-conditioned policy πpa|s,ωq that is evaluated for m distinct preferences ω1, . . . ,ωm, and let the resulting policy set be represented as tπpup“1,...,m, where πp “ πpa|s,ω “ ωpq, and Gπp is the corresponding unweighted expected return. We say the solution Gπp is dominated by Gπq when there is no objective for which πq is worse than πp, i.e., G πp i ă G πq i for @i P r1, 2, . . . , ns. If a solution is not dominated, it is part of the Pareto set denoted as P . The curve traced by the solutions in a Pareto set is also known as the Pareto front. In MORL, our goal is to define a policy such that its empirical Pareto set is a good approximation of the true Pareto front. While we do not know the true Pareto front for many problems, we can define metrics for relative comparisons between different algorithms. Specifically, we evaluate a Pareto set P based on two metrics, hypervolume and sparsity that we describe next.
Definition 1 (Hypervolume). Hypervolume HpP q measures the space or volume enclosed by the solutions in the Pareto set P :
HpP q “ ż
Rm 1HpP qpzq dz,
where HpP q “ tz P Z|Di : 1 ď i ď |P |, r ĺ z ĺ P piqu. P piq is the ith solution in P , ĺ is the dominance relation operator, and 1HpP qpzq equals 1 if z P HpP q and 0 otherwise. Higher hypervolumes are better. Definition 2 (Sparsity). Sparsity SpP q measures the density of the Pareto front covered by a Pareto set P :
SpP q “ 1|P | ´ 1
n ÿ
i“1
|P |´1 ÿ
k“1 pP̃ipkq ´ P̃ipk ` 1qq2,
where P̃i represents a list sorted as per the values of the ith objective in P and P̃ipkq is the kth value in the sorted list. Lower sparsity is better.
See Figure 1 for an illustration and Appendix F for discussion on other possible metrics.
4 D4MORL: DATASETS FOR OFFLINE MULTI-OBJECTIVE REINFORCEMENT LEARNING
In offline RL, the goal of an RL agent is to learn the optimal policy using a fixed dataset without any interactions with the environment (Levine et al., 2020). This perspective brings RL closer to supervised learning, where the presence of large-scale datasets has been foundational for further progress in the field. Many such data benchmarks exist for offline RL as well; a notable one is the D4RL (Fu et al., 2020) benchmark for continuous control which has led to the development of several state-of-the-art offline RL algorithms (Kostrikov et al., 2021; Kumar et al., 2020; Chen et al., 2021) that can scale favorably even in high dimensions. To the best of our knowledge, there are no such existing benchmarks for offline MORL. Even for the online setting, most works in MORL conduct evaluations on toy MDPs (e.g., gridworld) with a few exceptions that include continuous control, e.g., Chen et al. (2019); Xu et al. (2020). This calls for a much-needed push towards more challenging benchmarks for reliable evaluation of MORL, especially in the offline setting.
We introduce Datasets for Multi-Objective Reinforcement Learning (D4MORL), a large-scale benchmark for offline MORL. Our benchmark consists of offline trajectories from 6 multiobjective MuJoCo environments including 5 environments with 2 objectives each (MO-Ant, MOHalfCheetah, MO-Hopper, MO-Swimmer, MO-Walker2d), and one environment with three objectives (MO-Hopper-3obj). The objectives are conflicting for each environment; for instance, the two objectives in MO-Hopper correspond to jumping and running; in MO-HalfCheetah, MO-Swimmer, and MO-Walker2d, they correspond to the speed and energy savings of the agent. See Appendix A for more details on the semantics of the target objectives for each environment. These environments were first introduced in Xu et al. (2020) for online MORL, and as such, we use their pretrained ensemble policies as building blocks for defining new behavioral policies for dataset collection, which we discuss next.
4.1 TRAJECTORY SAMPLING
The quality of the behavioral policy used for sampling trajectories in the offline dataset is a key factor for benchmarking downstream offline RL algorithms. In existing benchmarks for single-objective RL such as D4RL (Fu et al., 2020), the quality of a behavioral policy can be ascertained and varied based on its closeness to a single expert policy, as measured by its scalar-valued returns. For a MOMDP, we do not have the notion of a scalar return and hence, a reference expert policy (or set of policies) should reflect the optimal returns for all possible preferences in the preference space.
We use Prediction-Guided Multi-Objective Reinforcement Learning (PGMORL), a state-of-the-art MORL algorithm for defining reference expert policies. PGMORL (Xu et al., 2020) uses evolutionary algorithms to train an ensemble of policies to approximate the Pareto set. Each reference policy in the ensemble is associated with a unique preference; as for any new preference, it is mapped to the closest preference in the reference set. The number of policies in the ensemble can vary significantly; for instance, we have roughly 70 reference policies for MO-Antand 2445 policies for harder environments such as MO-Hopper-3obj. Given a desired preference, we define two sets of behavioral policies:
1. Expert Dataset: We find the best reference policy in the policy ensemble, and always follow the action taken by the selected reference policy.
2. Amateur Dataset: As before, we first find the best reference policy in the policy ensemble. With a fixed probability p, we randomly perturb the actions of the reference policies. Otherwise, with probability 1 ´ p, we take the same action as the reference policy. In D4MORL, we set p “ 0.65.
Further details are described in Appendix C. In Figure 2, we show the returns of the trajectories rolled out from the expert and amateur policies for the 2 objective environments evaluated for a uniform sampling of preferences. We can see that the expert trajectories typically dominate the amateur trajectories, as desired. For the amateur trajectories, we see more diversity in the empirical returns for both objectives under consideration. The return patterns for the amateur trajectories vary across different environments providing a diverse suite of datasets in our benchmark.
4.2 PREFERENCE SAMPLING
The coverage of any offline dataset is an important factor in dictating the performance of downstream offline RL algorithms (Levine et al., 2020). For MORL, the coverage depends on both the behavioral MORL policy as well as the distribution of preferences over which this policy is evaluated. We use the following protocols for sampling from the preference space Ω. First, we restrict our samples to lie within a physically plausible preference space Ω˚ Ď Ω covered by the behavioral policy πβ . For instance, MO-Hopper has two objectives: jumping and running. Since the agent can never gain running rewards without leaving the floor. Thus, the preference of 100% running and 0% jumping is not achievable and excluded from our preference sampling distribution.
Second, we are primarily interested in offline trajectories that emphasize competition between multiple objectives rather than focusing on a singular objective. To enforce this criterion, we define 3 sampling distributions concentrated around the centroid of the preference simplex. The largest spread distribution samples uniformly from Ω˚ and is denoted as High-Entropy (High-H). Next, we have a Medium-Entropy (Med-H) distribution specified via samples of Dirichlet distributions with large values of their concentration hyperparameters (aka α). Finally, we have a Low-Entropy (Low-H) distribution that is again specified via samples of Dirichlet distributions but with low values of their concentration hyperparameters. We illustrate the samples for each of the preference distributions along with their empirical entropies in Figure 3. Further details on the sampling distributions are deferred to Appendix B. By ensuring different levels of coverage, we can test the generalizability of an MORL policy to preferences unseen during training. In general, we expect Low-H to be the hardest of the three distributions due to its restricted coverage, followed by Med-H and High-H.
Overall Data Generation Pipeline. The pseudocode for generating the dataset is described in Algorithm 1. Given a preference distribution, we first sample a preference ω and query the closest behavioral policy in either the amateur/expert ensemble matching ω. We roll out this policy for T time steps (or until the end of an episode if sooner) and record the state, action, and reward information. Each trajectory in our dataset is represented as:
τ “ă ω, s1,a1, r1, . . . , sT ,aT , rT ą
Algorithm 1 Data Collection in D4MORL procedure COLLECT(prefDist, nTraj, env, pretrainedAgents, T)
agents = pretrainedAgents prefs = prefDist(nTraj) all trajs = [] for ω in prefs do
agent = closestAgent(agents, ω) s = env.reset() done = False τ = [ω] t = 0 while (NOT done) AND (t ă T) do
a = agent.get action(s) s1, done, r = env.step(a) append s, a, s1, r to τ s = s1 t = t + 1
append τ to all trajs return all trajs
For every environment in D4MORL, we collect 50K trajectories of length T “ 500 for both expert and amateur trajectory distributions under each of the 3 preference distributions. Overall, this results in a total of 1.8M trajectories over all 6 environments, which corresponds to roughly 867M time steps. We refer the reader to Table 5 in Appendix B for additional statistics on the dataset.
5 PARETO-EFFICIENT DECISION AGENTS (PEDA)
In this section, we propose Pareto-Efficient Decision Agents (PEDA), a new family of offline multi-objective RL agents. PEDA aims to achieve Pareto-efficiency by extending Decision Transformers (Chen et al., 2021) into multi-objective setting. We first introduce the architecture of Decision Transformers (DT) and its variant, Reinforcement Learning Via Supervised Learning (RvS), followed by our modifications extending them to the multi-objective setting.
DT casts offline RL as a conditional sequence modeling problem that predicts the next action by conditioning a transformer on past states, actions, and desired returns. The desired returns are defined as returns-to-go (RTG) gt “ řT t1“t rt1 , the future returns that this action is intended to achieve. Therefore, the trajectory is represented by τ “ă s1,a1, g1, . . . , sT ,aT , gT ą. In practice, we use a causally masked transformer architecture such as GPT (Radford et al., 2019) to process this sequence and predict the actions by observing the past K timesteps consisting of 3K tokens. DT and its variants have been shown to be more stable and robust to optimize due to the simplicity of loss function; easier to scale to more complex settings such as environments with high-dimensional actions or states, and agents with broad capabilities such as multitask settings (Lee et al., 2022). Hence, we adopt Decision Transformers (Chen et al., 2021) as the representative base algorithm on which we build our work.
In follow-up work, Emmons et al. (2021) extend DT and shows that even multi-layer perceptrons conditioned on the average returns-to-go can achieve similar performance without the use of transformers. They call their model Reinforcement Learning Via Supervised Learning (RvS). However, RvS is generally not very stable when conditioned on very large returns, unlike DT.
5.1 MULTI-OBJECTIVE REINFORCEMENT LEARNING VIA SUPERVISED LEARNING
In PEDA, our goal is to train a single preference-conditioned agent for offline MORL. By including preference conditioning, we enable the policy to be trained on arbitrary offline data, including trajectories collected from behavioral policies that are associated with alternate preferences. To parameterize our policy agents, we extend the DT and RvS architectures to include preference tokens and vector-valued returns. We refer to such preference-conditioned extensions of these architectures as MODT(P) and MORVS(P) respectively, which we describe next.
Preference Conditioning. Naively, we can easily incorporate the preference ω into DT by adding this token for each timestep and feeding it a separate embedding layer. However, empirically we find that such a model design tends to ignore ω and the correlation between the preferences and predicted actions is weak. Therefore, we propose to concatenate ω to other tokens before any layers in MODT(P). Concretely, we define s˚ “ s À ω, a˚ “ a À ω, and g˚ “ g À ω where À denotes the concatenation operator. Hence, triples of s˚, a˚, g˚ form the new trajectory. As for MORVS(P), we concatenate the preference with the states and the average RTGs by default and the network interprets everything as one single input.
Multi-Objective Returns-to-Go. Similar to RTG for the single objective case, we can define vector-valued RTG as gt “ řT t1“t rt1 Given a preference vector ω, we can scalarize the total returns-to-go as ĝt “ ωTgt. In principle, the scalarized RTG ĝt can be recovered given the preference vector ω and the vector-valued RTG gt. However, empirically we find that directly feeding MODT/MORVS with the preference-weighted RTG vector gt d ω is slightly preferable for stable training, where d denotes the elementwise product operator. Another unique challenge in the MORL setting concerns the scale of different objectives. Since different objectives can signify different physical quantities (e.g., energy and speed), the choice of scaling can influence policy optimization. We adopt a simple normalization scheme, where the returns for each objective are normalized by subtracting the minimum observed value for that objective and dividing it by the range of values (max-min). Note that the maximum and minimum are computed based on the offline dataset and hence, they are not necessarily the true min/max objective values. Post this normalization, the values for every objective in the trajectory are on the same scale between 0 and 1. For evaluating the hypervolume and sparsity, we use the unnormalized values so that we can make comparisons across different datasets that may have different min/max boundaries.
Training. We follow a simple supervised training procedure where we train the policies on randomly sampled mini-batches with MSE loss (for continuous actions). In MODT and MODT(P), the input states, actions, and returns-to-go (with concatenated preferences) are treated as tokens and embedded through one layer of MLP. We apply a layer of MLP and Tanh on the last hidden state of GPT-2 transformer to predict next action. In MORVS and MORVS(P), we use only information from the current timestep and MLP layers to predict the next action.
6 EXPERIMENTS
In this section, we evaluate the performance of PEDA on D4MORL benchmark. First, we investigate the benefits of preference conditioning by evaluating on decision transformers (DT) and RvS (MORVS) where no preference information is available and we scalarize multi-objective vector returns into weighted sums. We denote our methods with preference conditioning as MODT(P) and MORVS(P). Second, we compare our methods with classic imitation learning and temporal difference learning algorithms with preference conditioning.
Imitation learning. Imitation learning simply uses supervised loss to train a mapping from states (w/ or w/o concatenating preferences) to actions. We use behavioral cloning (BC) here and train multi-layer MLPs as models named BC (w/o preference) and BC(P) (w/ preference).
Temporal difference learning. Conservative Q-Learning (CQL) (Kumar et al., 2020) is the stateof-the-art standard offline RL method, which learns a conservative Q-function f : S ˆ A Ñ R through neural networks. We modify the network architecture such that it also takes preference vectors as inputs to learn a preference-conditioned Q-function f˚ : S ˆ A ˆ Ω Ñ R. We denote this method as CQL(P).
6.1 MULTI-OBJECTIVE OFFLINE BENCHMARK
Hypervolume. We compare hypervolume of our methods with all baselines on expert datasets in Table 1 as well as amateur dataset in Table 2. For the two-objective environments, we evaluate the
models on 501 equally spaced preference points in the range [0, 1]; on the three-objective environment MO-Hopper-3obj, models are evaluated on 325 equally spaced points. Each point is evaluated 5 times with random environment re-initialization, and the median value is recorded. Finally, all the results are based on 3 random seeds and we report the mean performance along with the standard error. In Table 1 and Table 2, we can see that MODT(P) and MORVS(P) outperform other baselines and has a relatively very low standard error. Also, PEDA variants including MODT(P) and MORVS(P) approaches the behavioral policy upper-bound.
Sparsity. We also evaluate sparsity performance. Since sparsity comparison is only meaningful between models that are sensitive to preference and have a relatively similar hypervolume performance, we only show results for models that concatenate preference. Overall, MORVS(P) has the lowest sparsity in most environments, while at the same time featuring an outstanding hypervolume.
6.2 ABLATION STUDY
Pareto front approximation. We ablate how well the MODT(P) and MORVS(P) can approximate the Pareto front through conditioning on different preference points. We show the results in Figure 4, where we can see that the models can approximate the Pareto front while having some dominated points colored in pink mostly in the MO-Hopper and MO-Walker2d environments. The results are based on the average of 3 seeds, and the full plot can be found in Appendix G.
Return distribution. We ablate how well MODT(P) and MORVS(P) follow their given target return, based on a normalized and weighted value. We present the results in Figure 5 for MORVS(P) under High-H-Expert datasets and refer to Appendix H for full settings. Here, we see that the models follow the oracle line nicely when conditioned on the target within the dataset distribution, and generalize to targets outside of the dataset distribution as well.
7 CONCLUSION
We proposed a new problem setup for offline Multi-Objective Reinforcement Learning to scale Pareto-Efficient decision-making using offline datasets. To characterize progress, we introduced D4MORL, a dataset benchmark consisting of offline datasets generated from behavioral policies of different fidelities (expert/amateur) and rolled out under preference distributions with varying entropies (high/medium/low). Then, we propose PEDA, a family of offline MORL policy optimization algorithms based on decision transformers. To our knowledge, the PEDA variants are the first offline MORL policies that support continuous action and preference spaces. We showed that by concatenating and embedding preferences together with other inputs, our policies can effectively approximate the Pareto front of the underlying behavioral policy as measured by the hypervolume and sparsity metrics. Our proposed family includes MLP and transformer-based variants, viz. the MORVS(P) and MODT(P), with MORVS(P) performing the best overall. In some scenarios, the learned policies can also generalize to higher target rewards that exceed the data distribution.
REPRODUCIBILITY STATEMENT
Our code is available at: https://github.com/baitingzbt/PEDA.
ACKNOWLEDGEMENTS
AG’s research is supported by a Meta Research Award and a Cisco grant.
A ENVIRONMENT DESCRIPTION
All environments are the same as in Xu et al., 2020, except for when resetting the environment, each parameter is uniformly sampled from the rx´ 10´3, x` 10´3s with x being the default value. Except we always reset height as 1.25 for MO-Hopper and MO-Hopper-3objsince this parameter directly relates to the reward function. All environments have a max episode length of 500 steps per trajectory, but the agent may also die before reaching the maximum length.
A.1 MO-ANT
The two objectives in MO-Ant are achieved distance in x and y axes respectively, denoted as r “ rrvxt , r vy t s⊺.
Consider the position of the agent is represented as pxt, ytq at time t and takes the action at. The agent has a fixed survival reward rs “ 1.0, dt “ 0.05, and an action cost of ra “ 12 ř k a 2 k. The rewards are calculated as:
rvxt “ pxt ´ xt´1q { dt ` rs ´ ra rvyt “ pyt ´ yt´1q { dt ` rs ´ ra (1)
A.2 MO-HALFCHEETAH
The two objectives in MO-HalfCheetah are running speed, and energy saving, denoted as r “ rrvt , ret s⊺. Consider the position of the agent is represented as pxt, ytq at time t and takes the action at. The agent has a fixed survival reward rs “ 1.0, fixed dt “ 0.05, and an action cost of ra “ ř
k a 2 k. The
rewards are calculated as:
rvt “ mint4.0, pxt ´ xt´1q { dtu ` rs ret “ 4.0 ´ ra ` rs (2)
A.3 MO-HOPPER
The two objectives in MO-Hopper are running and jumping, denoted as r “ rrr, rjs⊺. Consider the position of the agent is represented as pxt, htq at time t and takes the action at. The agent has a fixed survival reward rs “ 1.0, a fixed initial height as hinit “ 1.25, a fixed dt “ 0.01, and an action cost of ra “ 2 ˆ 10´4 ř
k a 2 k. The rewards are calculated as:
rrt “ 1.5 ˆ pxt ´ xt´1q { dt ` rs ´ ra rjt “ 12 ˆ pht ´ hinitq { dt ` rs ´ ra (3)
A.4 MO-HOPPER-3OBJ
The physical dynamics are the same in MO-Hopper and MO-Hopper-3obj, while this environment has 3 objectives: running, jumping, and energy saving. The rewards are denoted as r “ rrr, rj , res⊺. Consider the position of the agent is represented as pxt, htq at time t and takes the action at. The agent has a fixed survival reward rs “ 1.0, a fixed initial height as hinit “ 1.25, a fixed dt “ 0.01, and an action cost of ra “ ř
k a 2 k. The rewards are calculated as:
rrt “ 1.5 ˆ pxt ´ xt´1q { dt ` rs
rjt “ 12 ˆ pht ´ hinitq { dt ` rs ret “ 4.0 ´ ra ` rs (4)
A.5 MO-SWIMMER
The two objectives in MO-Swimmer are speed and energy saving, denoted as r “ rrv, res⊺. Consider the position of the agent is represented as pxt, ytq at time t and takes the action at. The agent has a fixed dt “ 0.05, and an action cost of ra “ ř
k a 2 k. The rewards are calculated as:
rvt “ pxt ´ xt´1q { dt ret “ 0.3 ´ 0.15 ˆ ra
(5)
A.6 MO-WALKER2D
The objectives in MO-Walker2d are speed and energy saving, denoted as r “ rrv, res⊺. Consider the position of the agent is represented as pxt, ytq at time t and takes the action at. The agent has a fixed survival reward rs “ 1.0, a fixed dt “ 0.008, and an action cost of ra “ ř
k a 2 k.
The rewards are calculated as:
rvt “ pxt ´ xt´1q { dt ` rs ret “ 4.0 ´ ra ` rs (6)
B DATASET DETAILS
To uniformly sample the High-H data from the entire preference space, we can equivalently sample from an n-dimensional simplex, where n is the number of objectives. The resulting sampling scheme is:
ω_high = z / ||z||_1,  z ∼ f_exp( · , λ = 1)    (9)
Normalizing by the 1-norm of the exponential samples makes sure the entries of each preference vector add up to 1. When Ω* ≠ Ω, we perform rejection sampling to restrict the range.
To sample the Med-H and Low-H data, we first sample α from a non-negative uniform distribution and then sample the corresponding Dirichlet preference. Here, we sample a different α each time to make sure the mode of the Dirichlet changes, which allows more variation.
ω_med ∼ f_Dirichlet(α), where α ∼ Unif(0, 10^6)
ω_low ∼ f_Dirichlet(α), where α ∼ Unif(1/3 × 10^6, 2/3 × 10^6)    (10)
Since our behavioral policy consists of a group of single-objective policies π_β = {π_1, . . . , π_B}, with B being the total number of candidate policies, we first find the expected unweighted raw rewards G^{π_1}, . . . , G^{π_B}. Then, we find the estimated preferences ω̂^{π_1}, . . . , ω̂^{π_B} by letting ω̂_i^{π_b} = G_i^{π_b} / Σ_{j=1}^n G_j^{π_b}, the estimated preference on the i-th objective of the b-th candidate policy. For each sampled preference ω ∼ Ω* following (9) or (10), we sample a complete trajectory using the single-objective behavioral policy with the smallest Euclidean distance d(ω, ω̂^{π_b}). Empirically, this means picking the candidate policy whose expected reward ratio is closest to ω.
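A sketch of this preference sampling and policy matching; the function names and shape conventions are ours, and `G` is assumed to be a B × n matrix of expected unweighted returns of the candidate policies:

```python
import numpy as np

def sample_preference(n, entropy="high", rng=np.random.default_rng()):
    """Draw a preference vector per Eq. (9) (High-H) or Eq. (10) (Med/Low-H)."""
    if entropy == "high":
        w = rng.exponential(1.0, size=n)          # Exp(1) draws, then normalize
    else:
        lo, hi = (0.0, 1e6) if entropy == "med" else (1e6 / 3, 2e6 / 3)
        alpha = rng.uniform(lo, hi, size=n)       # a fresh alpha shifts the mode
        w = rng.dirichlet(alpha)
    return w / w.sum()

def nearest_policy(w, G):
    """Index of the candidate policy whose reward ratio is closest to w."""
    w_hat = G / G.sum(axis=1, keepdims=True)      # estimated preferences per policy
    return int(np.argmin(np.linalg.norm(w_hat - w, axis=1)))
```

Whether α is drawn per coordinate or as a single shared scalar is our assumption here; the equations above leave this ambiguous.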
C EXPERT & AMATEUR DATASETS
In the Expert collection, we sample trajectories using the fully-trained behavioral policy π_β. In this paper, we use PGMORL by Xu et al., 2020 as our behavioral policy π_β:
a_{t+1}^{expert} = π_β(a | s = s_t, ω = ω_t)    (11)
In the Amateur collection, actions have a 65% chance of being perturbed on top of the expert collection: a perturbed action is randomly scaled from the expert action, as follows:
aamateurt`1 “ " aexpertt`1 35% aexpertt`1 ˆ Unifp0.35, 1.65q 65%
(12)
In the MO-Swimmer environment only, we let actions have a 35% chance of being a uniform random sample from the entire action space, rather than being the same as the expert action, to increase variance and achieve a performance level similar to amateur. The resulting strategy for MO-Swimmer is:
aamateurt`1 “ " UnifpAq 35% aexpertt`1 ˆ Unifp0.35, 1.65q 65%
(13)
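A sketch of this amateur data-collection rule (Eqs. 12-13); whether the scaling is applied to the whole action vector or per coordinate is our assumption:

```python
import numpy as np

def amateur_action(a_expert, low, high, swimmer=False,
                   rng=np.random.default_rng()):
    """Perturb an expert action per Eq. (12); Eq. (13) variant for MO-Swimmer."""
    if rng.random() < 0.65:                        # perturbed branch (65%)
        return a_expert * rng.uniform(0.35, 1.65)
    if swimmer:                                    # MO-Swimmer: random action (35%)
        return rng.uniform(low, high, size=a_expert.shape)
    return a_expert                                # keep the expert action (35%)
```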
D FINDING APPROPRIATE MULTI-OBJECTIVE RTG
In Decision Transformer (Chen et al., 2021) and RvS (Emmons et al., 2021), the RTG denotes the desired future reward. In MORL, however, we need to design an appropriate multi-objective RTG. On top of discounting each objective's desired reward separately, we empirically find that since some objectives are inherently conflicting, setting the RTG high for one objective means we should accordingly lower the RTG for the other objectives (i.e., we should not use the maximum RTG for all of them). In this way, our test-time RTG follows the training distribution more closely.
In this paper, we use linear regression G = f(ω) to find the RTG conditioned on the given preference. Figure 6 demonstrates the weighted RTG of the "running" objective as a function of its preference in MO-Hopper, where the conflicting objectives are "running" and "jumping". The RTG clearly correlates with the conditioned preference for running, and we should adjust the initial RTG at test time accordingly.
Finally, we only use the linear regression model learned from the Expert dataset. This is because regression models fitted on sub-optimal data can easily produce an RTG lower than optimal. In practice, we can achieve a similar result by training the regression model only on the best-performing trajectories for the respective preferences. Other regression or clustering methods for finding an appropriate RTG could also work, and we leave them as future work, especially when not assuming linearly weighted objectives.
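A minimal sketch of this preference-to-RTG regression; the arrays `prefs` (preference vectors) and `returns` (per-objective returns of expert trajectories) are assumed inputs, and the bias term is our addition:

```python
import numpy as np

def fit_rtg_regression(prefs, returns):
    """Least-squares fit of G = f(w) with a bias term."""
    X = np.hstack([prefs, np.ones((len(prefs), 1))])   # (T, n_obj + 1)
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return coef                                        # (n_obj + 1, n_obj)

def initial_rtg(coef, w):
    """Test-time initial RTG for a given preference w."""
    return np.append(w, 1.0) @ coef
```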
E TRAINING DETAILS
In this section, we list our hyper-parameters and model details. Specifically, we use the same hyper-parameters for all algorithms, except for the learning rate scheduler and warm-up steps. In the MODT family, inputs are first embedded by a 1-layer fully-connected network, and n_layer denotes the number of transformer blocks; in the BC family, n_layer denotes the number of MLP layers used to embed each input; in MORVS and MORVS(P), we use the same embedding strategy as Emmons et al. (2021). Additionally, MORVS and MORVS(P) both have a context length of 1 because they only use the current state to predict the next action, whereas MODT and BC use the past 20 steps.
E.1 PARAMETERS
Hyperparameter     MODT     MORvS   BC
Context Length K   20       1       20
Batch Size         64       64      64
Hidden Size        512      512     512
Learning Rate      1e-4     1e-4    1e-4
Weight Decay       1e-3     1e-3    1e-3
Dropout            0.1      0.1     0.1
n_layer            3        3       3
Optimizer          AdamW    AdamW   AdamW
Loss Function      MSE      MSE     MSE
LR Scheduler       lambda   None    lambda
Warm-up Steps      10000    N/A     4000
Activation         ReLU     ReLU    ReLU
E.2 TRAINING STEPS
Dataset Name      MODT Steps   RvS/BC Steps
MO-Ant            20K          200K
MO-HalfCheetah    80K          200K
MO-Hopper         400K         200K
MO-Hopper-3obj    400K         200K
MO-Swimmer        260K         200K
MO-Walker2d       360K         200K
E.3 OTHER ATTEMPTED ARCHITECTURES FOR MODT AND MODT(P)
We tried the following MODT architectures in our preliminary experiments and eventually picked Case 4, as it gave the best performance.
1. Consider ω as an independent token of the transformer.
2. Train a separate embedding for ω, concatenate the embeddings to get f_φs(s) ⊕ f_φω(ω), f_φa(a) ⊕ f_φω(ω), and f_φg(g) ⊕ f_φω(ω), then pass them into the transformer.
3. Add another MLP layer on top of Case 2 after concatenation, then pass the output into the transformer.
4. Concatenate ω to the other tokens before any layers. This means we have s* = s ⊕ ω, a* = a ⊕ ω, and g* = g ⊕ ω (see the sketch after this list).
F OTHER EVALUATION METRICS
Among the variety of metrics for MORL, we use Hypervolume (HV) and Sparsity (SP) to benchmark models for several reasons. First, many metrics such as the ϵ-metric require prior knowledge of the true Pareto front, which is not available for our MuJoCo environments. Second, we only assume a linear reward function and cannot collect real-time user feedback, so utility-based metrics such as the expected utility metric (EUM) are not applicable. Finally, using the same metrics as the original behavioral policy paper facilitates algorithm comparisons.
G PARETO SET VISUALIZATIONS
We present the Pareto set visualizations for all of our models trained on each High-H dataset in Figure 7. Each point in each subplot is the average result of 3 seeds. In 2-objective environments, we evaluate the model using 501 equally spaced preference points; in 3-objective environments, we use 351 equally spaced preference points instead. Since the environments are stochastically initialized, we evaluate 5 times at each preference point and take the mean value, making each point the average of 15 runs. We allow a small tolerance when coloring the dominated points.
If a preference point is within the achievable preference space Ω* but its solution is dominated, we color it red. Since our models are conditioned on continuous preference points and the environments are initialized stochastically, we give a small tolerance (3%-8%) for points to be colored blue. The hypervolume and sparsity metrics, on the other hand, are computed over strictly undominated solutions without tolerance.
H MEDIUM & LOW ENTROPY DATASET TRAINING
We train on the Medium-Entropy and Low-Entropy datasets for the MO-HalfCheetah environment. Overall, models perform similarly on the Med-H and High-H datasets but suffer when trained only on Low-H. We present the results in Table 6, which shows that the Low-H dataset has worse expert and amateur performance due to the reduced variability of preferences. However, MODT(P) and MORVS(P) are still able to come close to or exceed the behavioral policy in hypervolume on all datasets, which showcases the effectiveness of PEDA as an efficient MORL policy. Results are based on an average of 3 seeds with the standard error given.
I TRAINING WITH 1-DIM RTG
We attempted to train MODT and RvS with a 1-dimensional return-to-go rather than a separate RTG for each objective. According to the results on MO-HalfCheetah and the High-H dataset in Table 7, using a multi-dimensional RTG enhances the performance of MODT(P) and is about the same for MORVS(P) when the preference is concatenated to states. However, it reduces the standard error significantly for both MODT(P) and MORVS(P). In the naive models, where preferences are not concatenated to states, using a multi-dimensional RTG helps achieve a much more competitive hypervolume. We thus believe a multi-dimensional RTG conveys important preference information when the model does not directly take the preference as an input. Results are based on an average of 3 seeds with the standard error given.

1. What is the focus and contribution of the paper regarding offline multi-objective reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application of return-conditioned sequence modeling?
3. Do you have any concerns regarding the exploration parameter p in the data generation process for the amateur policy?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What is the significance of the benchmarking dataset introduced in the paper, and how does it contribute to the community?
Summary Of The Paper
This paper studies offline multi-objective reinforcement learning (offline MORL). In the first half of the paper, the authors introduce a new benchmarking dataset. For 6 tasks it contains trajectories with different preferences, where for each preference we have data from an expert policy (with similar preference profile) and a noisy amateur policy. In the second half of the paper, the authors introduce a new approach for offline MORL, based on return-conditioned sequence modeling. Experiments show that their method outperforms baseline approaches on their own benchmark.
Strengths And Weaknesses
Strengths:
The paper is well written and clear.
The benchmarking dataset is a useful contribution for the community.
Related work is well covered.
The paper contains very useful illustrations, such as Fig 1 to illustrate Sec. 3, and Figures 2 and 3 to illustrate the data generation process for the benchmark.
Extensive experiments. The generalization performance in Fig. 5 is very impressive.
Weaknesses:
For data generation of the amateur policy, you set the exploration parameter p to 65%. This seems rather high to me: in most problems you will not get very far with such high exploration. Couldn’t you vary the amount of exploration, from low to high, for different episodes? Or gradually increase the noise during the episode, to make sure that your amateur policy sometimes gets a bit further in the domain? I think you see this effect in Fig 2: in MO-Swimmer for example you see that the amateur policies cover quite a demarcated area of the return space, indicating that your exploration/noise scheme covers too small a region of the overall return space.
You extensively mention Decision Transformers in your abstract and introduction, but actually your best-performing models are the MORvS(P) ones, which do not use a transformer. I think you would need to phrase this differently.
On the algorithmic side, the innovation is mostly in the application of the (return-conditioned) sequence modeling approach to the offline RL setting. There are some details about how to feed the preferences and returns into the model, with some claims about what worked and did not work (without results though). It is a useful insight to use sequence modeling for offline RL though, since you typically want to stay close to the original data.
Top of page 7: You say that a scalarized \hat{g} and the preference vector omega can recover the vector-valued g. This is not true right? Imagine omega = [0.5,0.5], and \hat{g} = 0.5, then both g = [1,0] and g=[0,1] would work (or any g for which sum(g) = 1.0). I think this explains why you need the elementwise product between g and omega (but why not feed them in completely separately?).
Minor:
Sec 2: previous work is “primarily” demonstrated on tabular tasks → But not only right? Try to be precise here, i.e., what is your extension?
Sec 3: I miss an explanation why “sparsity” is a relevant measure?
What type of noise distribution do you inject in generating the amateur policy? These are continuous tasks right, so do you use Gaussian noise, or simply a uniform distribution within the bounds of the action space?
Clarity, Quality, Novelty And Reproducibility
Clarity: Good
Quality: Good
Novelty: Reasonable
Reproducibility: Work should be reproducible.
ICLR
Title
Recurrent Parameter Generators
Abstract
We present a generic method for recurrently using the same parameters for many different convolution layers to build a deep network. Specifically, for a network, we create a recurrent parameter generator (RPG), from which the parameters of each convolution layer are generated. Though using recurrent models to build a deep convolutional neural network (CNN) is not entirely new, our method achieves significant performance gain compared to the existing works. We demonstrate how to build a one-layer-size neural network to achieve similar performance compared to other traditional CNN models on various applications and datasets. We use the RPG to build a ResNet18 network with the number of weights equivalent to one convolutional layer of a conventional ResNet and show this model can achieve 67.2% ImageNet top-1 accuracy. Additionally, such a method allows us to build an arbitrarily complex neural network with any amount of parameters. For example, we build a ResNet34 with model parameters reduced by more than 400 times, which still achieves 41.6% ImageNet top-1 accuracy. Furthermore, the RPG can be further pruned and quantized for better run-time performance in addition to the model size reduction. We provide a new perspective for model compression. Rather than shrinking parameters from a large model, RPG sets a certain parametersize constraint and uses the gradient descent algorithm to automatically find the best model under the constraint. Extensive experiment results are provided to demonstrate the power of the proposed recurrent parameter generator.
1 INTRODUCTION
Deep learning has achieved great success with increasingly more training data and deeper & larger neural networks: A recently developed NLP model, GPT-3 (Brown et al., 2020), has astonishingly 175 billion parameters! While the model performance generally scales with the number of parameters (Henighan et al., 2020), with parameters outnumbering training data, the model is significantly over-parameterized. Tremendous effort has been made to reduce the parameter redundancy from different perspectives, including neural network pruning (LeCun et al., 1990; Han et al., 2016; Liu et al., 2018), efficient network design spaces (Howard et al., 2017; Iandola et al., 2016; Sandler et al., 2018), parameter regularization (Wan et al., 2013; Wang et al., 2020a; Srivastava et al., 2014), model quantization (Hubara et al., 2017; Rastegari et al., 2016; Louizos et al., 2019), neural architecture search (Zoph & Le, 2017; Cai et al., 2018; Wan et al., 2020), recurrent models (Bai et al., 2019; 2020; Wei et al., 2016), multi-task feature encoding (Ramamonjisoa & Lepetit, 2019; Hao et al., 2021), etc.
One of the most prominent approaches in this direction is the pruning-based model compression, which dates back to the late 80s or early 90s (Mozer & Smolensky, 1989; LeCun et al., 1990) and has enjoyed a resurgence (Han et al., 2016; Blalock et al., 2020) recently. These pruning methods seek to remove the unimportant parameters from a pre-trained large neural network and can frequently achieve an enormous model-compression ratio.
Though sharing a similar motivation to reduce parameter redundancy, we explore an entirely different territory of model parameter reduction: rather than compressing a large model, we define an arbitrarily large model based on a fixed set of parameters to maximize the model capacity. In this work, we propose to define many different layers in a deep neural network based on a fixed amount of parameters, which we call a recurrent parameter generator (RPG). That is, we differentiate between the number of model parameters and the degrees of freedom (DoF). Traditionally, model parameters are treated independently of each other; the total number of parameters is the number of DoF. However, by tapping into how a core set of free parameters can be assigned to the neural network model, we can develop a large model of many parameters with a small number of degrees of freedom. In other words,
there is excess capacity in neural network models independent of how and where the parameters are used in the network. Even at the level of individual scalar values, parameters can be reused in another arbitrary location of the deep network architecture without significantly impacting model performance. Surprisingly, backpropagation training of a deep network is able to cope with the same parameter being assigned to multiple random locations in the network without significantly impacting model performance. Through extensive experiments, we show that a large neural network does not need to be overparameterized to achieve competitive performance. Particularly, a ResNet18 can be implemented with the number of weights equivalent to one convolution layer in a conventional ResNet (4.72× parameter reduction) and still achieve 67.2% ImageNet top-1 accuracy. The proposed method is also extremely flexible in reducing the model parameters. In some sense, the proposed RPG method can be viewed as an automatic model parameter reduction technique, which explores the optimal accuracy-parameter trade-off. When we reduce the model parameters, RPG shows graceful performance degradation, and beyond this flexibility, its compression results are frequently on par with SOTA pruning methods. Even if we reduce the ResNet18 backbone parameters to 36K, which is about a 300× reduction, ResNet18 can still achieve 40.0% ImageNet top-1 accuracy. Notably, we choose a destructive parameter sharing method (Cheung et al., 2019) for RPG in this work, which discourages any potential representation sharing from layer to layer. Compared to other recurrent weight-sharing methods, e.g., the convolutional pose machine (CPM) or multi-scale deep equilibrium models (MDEQ), our method achieves competitive performance on various benchmarks. Further, we show RPG can be quantized and pruned to improve FLOPs and run time with very tiny accuracy drops. This makes RPG a strong and practical baseline for probing whether there is nontrivial representation sharing within any recurrent network.
To summarize, we make the following contributions:
1. This work provides a new perspective towards automatic model parameter reduction: we can define a neural network with a certain DoF constraint and let gradient descent optimization automatically find the best model under the desired constraint.
2. We propose the recurrent parameter generator (RPG), which decouples the network architecture and the network DoF. Given a certain neural network architecture, we can flexibly choose any DoF to construct the network.
3. By separating the network architecture from the parameter generator, RPG becomes a tool for us to understand the relationship between the model DoF and the network performance. We observe an empirical log-linear DoF-Accuracy relationship.
2 RELATED WORK
There are many important efforts to compress neural networks or to reduce the redundancy in neural network parameters. We discuss each of the approaches and their relationships to our work.
Model Pruning, Neural Architecture Search, and Quantization. Model pruning seeks to remove the unimportant parameters in a trained model. Recently, it has been proposed to use neural architecture search as a form of coarse-grained model pruning (Yu et al., 2018; Dong & Yang, 2019). Another related effort is neural network quantization (Hubara et al., 2017; Rastegari et al., 2016; Louizos et al., 2019), which seeks to reduce the bits used for each parameter and can frequently reduce the model size by 4× with minimal accuracy drop. More recently, Dollár et al. (2021) present a framework for analyzing model scaling strategies that considers network properties such as FLOPs and activations.
Parameter Regularization and Priors. Another highly related direction is parameter regularization. Regularization has been widely used to reduce model redundancy (Krogh & Hertz, 1992), alleviate model overfitting (Srivastava et al., 2014; Wan et al., 2013), and ensure desired mathematical regularity (Wang et al., 2020a). RPG can be viewed as a parameter regularization in the sense that weight sharing poses many equality constraints to weights and regularizes weights to a low-dimensional space. HyperNeat (Stanley et al., 2009) and CPPNs (Stanley, 2007) use networks to determine the weight between two neurons as a function of their positions. Karaletsos et al. (2018) and Karaletsos & Bui (2020) introduced a similar idea by providing a hierarchical prior for network parameters.
Recurrent Networks and Deep Equilibrium Models. Recurrence and feedback have been shown in psychology and neuroscience to act as modulators or competitive inhibitors that aid feature grouping (Gilbert & Sigman, 2007), figure-ground segregation (Hupé et al., 1998), and object recognition (Wyatte et al., 2012). Recurrence-inspired mechanisms also achieve success in feed-forward models. There are two main ways of employing recurrence, based on whether weights are shared across recurrent modules. ResNet (He et al., 2016), a representative of reusing similar structures without weight sharing, introduces parallel residual connections and achieves better performance by going deeper in networks. Similarly, some works (Szegedy et al., 2015; Srivastava et al., 2015) also find it useful to iteratively inject thus-far representations into the feed-forward network. Stacked inference methods (Ramakrishna et al., 2014; Wolpert, 1992; Weiss & Taskar, 2010) are also related, although they consider each output in isolation. Several works find sharing weights across recurrent modules beneficial. They demonstrate applications in temporal modelling (Weiss & Taskar, 2010; Xingjian et al., 2015; Karpathy & Fei-Fei, 2015), spatial attention (Mnih et al., 2014; Butko & Movellan, 2009), pose estimation (Wei et al., 2016; Carreira et al., 2016), and so on (Li et al., 2016; Zamir et al., 2017). Such methods usually shine in modeling long-term dependencies. In this work, we recurrently share weights across different layers of a feedback network to reduce network redundancy.
Given that stacking weight-shared modules improves performance, researchers have considered running effectively infinite depths of such modules by making the sequential modules converge to a fixed point (LeCun et al., 1988; Bai et al., 2019). Employing such equilibrium models in existing networks, they show improved performance in many natural language processing (Bai et al., 2019) and computer vision tasks (Bai et al., 2020; Wang et al., 2020b). One issue with deep equilibrium models is that the forward and backward propagation usually takes many more iterations than explicit feed-forward networks. Some work (Fung et al., 2021) improves the efficiency by making the backward propagation Jacobian-free. Another issue is that infinite depth and fixed points may be unnecessary, or even too strict, for some tasks. Instead of achieving infinite depth, our model shares parameters up to a certain level. We empirically compare with equilibrium models in Section 5.
Efficient Network Space and Matrix Factorization. Convolution is an efficient and structured matrix-vector multiplication. Arguably, the most fundamental idea in building efficient linear systems is matrix factorization. Given the redundancy in deep convolutional neural network parameters, one can leverage the matrix factorization concept, e.g., factorized convolutions, and design more efficient network classes (Howard et al., 2017; Iandola et al., 2016; Tan & Le, 2019; Sandler et al., 2018).
3 RECURRENT PARAMETER GENERATOR
We define recurrent parameter generators and show a certain kind of generating matrices that leads to destructive weight sharing. For better parameter capacity, we introduce an even sampling strategy.
Recurrent Parameter Generator. Assume we are constructing a deep convolutional neural network that contains L different convolution layers. Let K_1, K_2, . . . , K_L be the corresponding L convolutional kernels.¹ Rather than using separate sets of parameters for different convolution layers, we create a single set of parameters W ∈ ℝ^N and use it to generate the corresponding parameters for each convolution layer:
K_i = R_i · W,  i ∈ {1, . . . , L}    (1)
where R_i is a fixed predefined generating matrix, which is used to generate K_i from W. We call {R_i} and W the recurrent parameter generator (RPG). In this work, we always assume that the size of W is smaller than the total number of parameters of the model, i.e., |W| ≤ Σ_i |K_i|. This means an element of W will generally be used in more than one layer of the neural network. Additionally, the gradient of W is a linear superposition of the gradients from each convolution layer. During training, assume convolution kernel K_i receives gradient ∂ℓ/∂K_i, where ℓ is the loss function. By the chain rule, the gradient of W is:
∂ℓ/∂W = Σ_{i=1}^{L} R_i^⊺ · ∂ℓ/∂K_i    (2)
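In an autograd framework, the superposed gradient of Eq. (2) is accumulated automatically once layers index into a shared W; a minimal sketch with a toy loss of our own making:

```python
import torch

W = torch.randn(1000, requires_grad=True)     # shared parameter vector
idx1, idx2 = torch.randperm(1000), torch.randperm(1000)

K1, K2 = W[idx1], W[idx2]                     # two "kernels" generated from W
loss = (K1.sum() - 1.0) ** 2 + (K2.mean() + 2.0) ** 2
loss.backward()
# W.grad now holds R1^T dL/dK1 + R2^T dL/dK2, the superposition in Eq. (2)
```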
Generating Matrices. There are many different ways to create the generating matrices {Ri}. In this work, we primarily explore the destructive generating matrices, which tend to prevent different kernels from sharing the representation during weight sharing.
Destructive Weight Sharing. For easier discussion, let us first look at a special case where all of the convolutional kernels have the same size and are used in the same shape in the corresponding convolution layers. In other words, {R_i} are square matrices, the spatial sizes of all convolutional kernels are the same, d_in × d_out × w × h, and the input channel dimension d_in always equals the output channel dimension d_out. In this case, a filter f in a kernel can be treated as a vector in ℝ^{dwh}. Further, we choose R_i to be a block-diagonal matrix R_i = block-diag{A_i, A_i, . . . , A_i}, where A_i ∈ O(dwh) is an orthogonal matrix that rotates each filter of the kernel K_i in the same fashion. Similar to Proposition 2 in (Cheung et al., 2019), we show in Appendix B that: if A_i, A_j are sampled from the O(M) Haar distribution and f_i, f_j are the same filter (generated by R_i, R_j respectively from W) of K_i, K_j respectively, then E[⟨f_i, f_j⟩] = 0 and E[⟨f_i/‖f_i‖, f_j/‖f_j‖⟩²] = 1/M. Since M is usually large, the same filters from K_i and K_j are close to orthogonal and generally dissimilar. This shows that even when {K_i} are generated from the same W, they do not share the representation.
Even though {A_i} are not updated during training, the size of A_i can be quite large in general. In practice, we can use a permutation p ∈ P(M) and an element-wise random sign reflection b ∈ B(M) to construct a subset of the orthogonal group O(M), i.e., we choose A_i ∈ {b ◦ p | b ∈ B(M), p ∈ P(M)}.² Since pseudo-random numbers are used, it takes only two random seeds to store a random permutation and an element-wise random sign reflection.
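The expectation above is proved for Haar-distributed A_i; a quick numerical check of the sign-and-permute construction (our sketch, not from the paper) shows the same behavior empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
M, trials = 256, 5000
f = rng.standard_normal(M)                 # one filter as a vector in R^M
sq_cos = []
for _ in range(trials):
    fi = rng.choice([-1.0, 1.0], M) * f[rng.permutation(M)]   # A_i f
    fj = rng.choice([-1.0, 1.0], M) * f[rng.permutation(M)]   # A_j f
    c = fi @ fj / (np.linalg.norm(fi) * np.linalg.norm(fj))
    sq_cos.append(c * c)
print(np.mean(sq_cos), 1.0 / M)            # both values are close to 1/M
```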
In this work, we generalize the usage of R_i beyond the block-diagonal generating matrices described above. {K_i} may have different sizes, which can even be chosen larger than the size of W. When K_i ∈ ℝ^{N_i} is larger than W ∈ ℝ^N, the corresponding generating matrix R_i is a tall matrix. There are many ways to efficiently create the generating matrices. We use random permutations P(N_i) and element-wise random sign reflections B(N_i) to create R_i:
R_i ∈ {b ◦ p | b ∈ B(N_i), p ∈ P(N_i)},  i = 1, . . . , L    (3)
Such {R_i} tend to lead to destructive weight sharing and to better utilization of the parameter capacity.
¹ In this paper, we treat each convolutional kernel as a vector. When the kernel is used to do the convolution, it will be reshaped into the corresponding shape.
² Permutations and element-wise random sign reflections are conceptually subgroups of the orthogonal group, but we never use them in matrix form, for obvious efficiency reasons.
Even Parameter Distribution for Different Layers. While it is easy to randomly sample elements from W when generating parameters for each layer, this may not be optimal: some elements of W may never be used, and some may be used more often than average. We use an equalization technique to guarantee that all elements of W are evenly sampled. Suppose the size of W is N, and the total number of parameters of the layers to be generated is M, with M > N. We first generate ⌊M/N⌋ copies of the array {x | x = 1, . . . , N} and concatenate them with (M mod N) elements randomly sampled from {x | x = 1, . . . , N}. We call the concatenated array the index array u of length M and randomly shuffle its elements. When initializing each layer's parameters, we sequentially take the chosen indices from the shuffled index array u. In this way, each layer's parameters are randomly and evenly sampled from W. We refer to W as a model ring, since its elements are recurrently used in a loop. For storage efficiency, we only need to save two random seeds (one for sampling the (M mod N) elements and one for shuffling) instead of the large index array u.
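A sketch of this equalized generation, combining the even index array with per-layer sign flips as in Eq. (3); the function name and interface are ours:

```python
import numpy as np

def build_rpg_indices(N, layer_sizes, seed=0):
    """Return (indices, signs) per layer so that K_i = signs_i * W[indices_i]."""
    rng = np.random.default_rng(seed)
    M = sum(layer_sizes)
    u = np.tile(np.arange(N), M // N)                    # floor(M/N) full passes
    u = np.concatenate([u, rng.integers(0, N, M % N)])   # remaining M mod N picks
    rng.shuffle(u)                                       # random, even assignment
    signs = rng.choice([-1.0, 1.0], size=M)              # element-wise reflections
    layers, start = [], 0
    for size in layer_sizes:
        layers.append((u[start:start + size], signs[start:start + size]))
        start += size
    return layers
```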
Batch Normalization. Model performance is relatively sensitive to the batch normalization parameters. For better performance, each of the convolution layers needs to have its own batch normalization parameters. In general, however, the size of batch normalization is relatively negligible. Yet when W is extremely small (e.g., 36K parameters), the size of batch normalization should be considered.
4 RECURRENT PARAMETER GENERATOR AT MULTIPLE SCALES
In the previous section, we discuss the general idea of superposition where only one RPG is created and shared globally across all layers. We could also create several local RPGs, each of which is shared at a certain scale, such as a block or a sub-network. Such superpositions may be useful for certain applications such as recurrent modeling.
RPGs at Block-Level. Researchers propose network architectures that reuse the same design of network blocks multiple times for higher learning capacity, as discussed in the related work. Instead of using one global RPG for the entire network, we could alternatively create several RPGs that are shared within certain network blocks. We take ResNet18 (He et al., 2016) as a concrete example (Fig.3). ResNet18 has four building blocks. Every block has 2 residual convolution modules. To superpose ResNet18 at block scale, we create four local RPGs. Each RPG is shared within the corresponding building block, where the size of the RPG is flexible and can be determined by users.
RPGs at Sub-Network-Level. Reusing sub-networks, or recurrent networks, has achieved success in many tasks, as it iteratively refines and improves the prediction. Usually, weights are shared when reusing the sub-networks. This may not be optimal, as sub-networks at different stages iteratively improve the prediction, and shared weights may limit the learning capacity to adapt to different stages. On the other hand, not sharing weights at all greatly increases the model size. We superpose different sub-networks with one or more RPGs. Superposed sub-networks can have a much smaller model size, while parameters for different sub-networks undergo destructive changes instead of direct copy-paste. We show applications of superposed sub-networks for pose estimation and multitask regression (Sections 5.3 and 5.4).
5 EXPERIMENTAL RESULTS
We evaluate the performance of RPG on various tasks, illustrated in Fig.2. We refer to the model DoF as the number of parameters or parameter size for convenience, although their differences have been discussed in the Introduction. For classification, RPG is used for the entire network except for the last fully connected (fc) layer; thus, we discuss the reduction in backbone parameters. For example, ResNet18 has 11M backbone parameters and 512K fc parameters, and RPG is applied to reduce the 11M backbone parameters only. Experiments are conducted on NVIDIA GeForce GTX 2080Ti GPUs.
5.1 CIFAR CLASSIFICATION
Implementation Details. All CIFAR experiments use a batch size of 128, weight decay of 5e-4, and an initial learning rate of 0.1 with a gamma of 0.1 at epochs 60, 120, and 160. We use Kaiming initialization (He et al., 2015) with adaptive scaling: shared parameters are initialized with a particular variance, and each layer's parameters are scaled to match Kaiming initialization.
Compared to Deep Equilibrium Models. As a representative of implicit models, deep equilibrium models (Bai et al., 2019) can reduce model redundancy by finding fix points via additional optimizations. We compare the image classification accuracy on CIFAR10 and CIFAR100 datasets, as well as the inference time on CIFAR100 (Table 1). Following the settings of MDEQ (Bai et al., 2020), an image was sequentially fed into the initial convolutional block, the multi-scale deep equilibrium block (dubbed as MS block), and the classification head. MDEQ (Bai et al., 2020) achieves infinite MS blocks by finding the fixed point of the MS block. We reuse the MS block two to four times without increasing the number of parameters. Our RPG achieves 3.4% - 5.8% gain on CIFAR10 and 3% - 5.9% gain on CIFAR100. Our inference time is 15 - 25 times smaller than MDEQ since MDEQ needs additional time to solve equilibrium during training.
Global RPG with Varying # Parameters. We create one global RPG to generate parameters for convolution layers of ResNet and refer to it as ResNet-RPG. We report CIFAR100 top-1 accuracy of ResNet-RPG18 and ResNet-RPG34 at different number of parameters (Fig.4 and Table 3). Compared to ResNet, ResNet-RPG achieves higher accuracy at the same parameter size. Specifically, we achieve 36% CIFAR100 accuracy with only 8K backbone parameters. Furthermore, ResNet34-RPG achieves higher accuracy than ResNet18-RPG, indicating increasing time complexity gives performance gain.
Local RPGs at the Block-Level. In the previous ResNet-RPG experiments, we use one global RPG (Fig.3-Upper). We also evaluate the performance when RPGs are shared locally at the block level, as shown in Fig.3-Lower. In Table 2, compared to plain ResNet18 at the same number of parameters, our block-level RPG network gives a 1.0% gain. In contrast, our ResNet-RPG (with parameters evenly distributed) gives a 1.4% gain. Using one global RPG where the parameters of each layer are evenly distributed is thus 0.4% higher than using multiple RPGs.
Comparison to Baselines. Table 2 compares RPG with other baseline parameter-reduction methods, including random weight sharing, weight sharing with the hashing trick (Chen et al., 2015), and weight sharing with Lego filters (Yang et al., 2019). At the same number of parameters, our RPG outperforms all other baselines, demonstrating the effectiveness of the proposed method.
5.2 IMAGENET CLASSIFICATION
Implementation Details. All ImageNet experiments use a batch size of 256, weight decay of 3e-5, and an initial learning rate of 0.3 with a gamma of 0.1 every 75 epochs, for 225 epochs in total. Our schedule differs from the standard schedule, as the weight-sharing mechanism requires different training dynamics. We tried a few settings and found this one to be the best for RPG.
RPG with Varying # Parameters. We use one RPG with different numbers of parameters for ResNet and report the top-1 accuracy (Table 3 and Fig.1(Right)). ResNet-RPGs consistently achieve higher performance than ResNets with the same number of parameters. Specifically, ResNet-RPG34 achieves the same 73.4% accuracy as ResNet34 with only half of ResNet34's backbone parameters. ResNet-RPG18 also achieves the same accuracy as ResNet18 with only half of ResNet18's backbone parameters. Further, we find RPG networks have higher generalizability (Section 5.6).
Power Law. Empirically, accuracy and the number of parameters follow a power law when the RPG model size is below 50% of the original plain ResNet model size. The exponents of the power laws are the same for ResNet18-RPG and ResNet34-RPG on ImageNet when compared against ResNet34 accuracies. This scaling law may be useful for estimating network performance without training the network. Similarly, Henighan et al. (2020) also identify a power law relating performance and model size for transformers. The proposed RPG enables under-parameterized models for large-scale datasets such as ImageNet, which may unleash new studies and findings.
5.3 POSE ESTIMATION
Implementation Details. We superpose sub-networks for pose estimation with a globally shared RPG. We use hourglass networks (Newell et al., 2016) as the building backbone. The input image is first fed to an initial convolution block to obtain a feature map. The feature map is then fed to multiple stacked pose-estimation sub-networks. Each sub-network outputs a pose estimation prediction, which is penalized by the pose estimation loss. The convolutional pose machine (CPM) (Wei et al., 2016) shares all the weights across different sub-networks. We create one global RPG and generate parameters for each sub-network. Our model size is set to be the same as CPM's. We also compare with larger models where the parameters of sub-networks are not shared.
We evaluate on MPII Human Pose dataset (Andriluka et al., 2014), a benchmark for articulated human pose estimation, which consists of over 28K training samples over 40K people with annotated body joints. We use the hourglass network (Newell et al., 2016) as backbone and follow all their settings.
Results and Analysis. We report the Percentage of Correct Key-points at 50% threshold (PCK@0.5) of different methods in Table 4. CPM (Wei et al., 2016) share all parameters for different sub-
networks. We use one RPG that is shared globally at the same size as CPM. For reference, we also compare with the no-sharing model as the performance ceiling. Adding the number of recurrences leads to performance gain for all methods. At the same model size, RPG achieves higher PCK@0.5 compared to CPM. Increasing the number of parameters by not sharing sub-network parameters also leads to some performance gain.
5.4 MULTI-TASK REGRESSION
Implementation Details. We superpose sub-networks for multi-task regression with multiple RPGs at the building-block level. We focus on predicting depth and normal maps from a given image. We stack multiple SharpNet (Ramamonjisoa & Lepetit, 2019), a network for monocular depth and normal estimation. Specifically, we create multiple RPGs at the SharpNet building-block level. That is, parameters of corresponding blocks of different sub-networks are generated from the same RPG.
We evaluate the monocular depth and normal prediction performance on the Stanford 3D indoor scene dataset (Armeni et al., 2017). It contains over 70K images with corresponding depths and normals, covering over 6,000 m² of indoor area. We follow all settings of SharpNet (Ramamonjisoa & Lepetit, 2019), a state-of-the-art monocular depth and normal estimation network.
Results and Analysis. We report the mean square errors for depth and normal estimation in Table 5. Compared to one-time inference without recurrence, our RPG network gives 3% and 2% gains for depth and normal estimation, respectively. Directly sharing weights but using new batch normalization layers decreases the performance by 1.2% and 0.3% for depth and normal. Sharing weights and normalization layers further decreases the performance by 0.7% and 0.9% for depth and normal.
5.5 PRUNING RPG
Fine-Grained Pruning. Fine-grained pruning methods aim at reducing the model parameters by sparsifying weight matrices. Such methods usually do not reduce the inference speed, although custom algorithms (Gale et al., 2020) may improve the speed. At the same number of parameters, RPG outperforms state-of-the-art fine-grained pruning method IMP (Frankle et al., 2019). Accuracy drops of RPG and IMP are similar, both around 2% (Table 6). It is worth noting that IMP could have faster inference speed with sparse GPU kernels (Gale et al., 2020).
Coarse-Grained Pruning. While RPG is not designed to reduce FLOPs, it can be combined with coarse-grained pruning to reduce FLOPs. We prune RPG filters with the lowest ℓ1 norms. Table 7 shows that the pruned RPG achieves on-par performance with the state-of-the-art coarse-grained pruning method Knapsack (Aflalo et al., 2020) at the same FLOPs.
5.6 ANALYSIS
Comparison to Pruning Methods. We report our ResNet18-RPG performance with different numbers of parameters on ImageNet, along with some baseline pruning methods, in Fig.1(Right). Our RPG networks outperform SOTA pruning methods such as (Aflalo et al., 2020; Dong & Yang, 2019; He et al., 2019; 2018; Dong et al., 2017; Khetan & Karnin, 2020). Specifically, at the same number of parameters, our RPG network has a 0.6% gain over knapsack pruning (Aflalo et al., 2020), the method that achieves the best ImageNet pruning accuracy.
Generalizability. We report the performance gap between training and validation set on ImageNet (Table 8(a)) and MPII pose estimation (Table 8(b)). CPM (Wei et al., 2016) serves as the baseline pose estimation method. RPG models consistently achieve lower gaps between training and validation set, indicating the RPG model suffers less from over-fitting.
We also report the out-of-distribution performance of RPG models. ObjectNet (Barbu et al., 2019) contains 50k images with 113 classes overlapping with ImageNet. Previous models are reported to have a large performance drop on ObjectNet. We directly evaluate the performance of the ImageNet-trained models on ObjectNet without any fine-tuning (Table 8(c)). With the same number of backbone parameters, our ResNet-RPG achieves a 3.1% gain over ResNet18. With the same network architecture design, our ResNet-RPG achieves a 0.5% gain over ResNet34. This indicates our RPG networks have higher out-of-distribution performance even with smaller model sizes.
Quantization. Network quantization can reduce model size with minimal accuracy drop. It is of interest to study whether RPG models, whose parameters have already been shrunk, can be quantized. After 8-bit quantization, the accuracy of ResNet18-RPG (5.6M parameters) drops by only 0.1 percentage points on ImageNet, indicating RPG can be quantized for further size reduction. Details are in Appendix A.
Security. Permutation matrices generated by the random seed can be considered security keys to decode the model. In addition, only the random seeds used to generate the transformation matrices need to be saved and transferred, which is efficient in terms of size.
5.7 ABLATION STUDIES
We conduct ablation studies to understand the roles of the permutation and reflection matrices (Fig.5). We evaluate ResNet-RPG34 with 2M backbone parameters. Using both permutation and reflection matrices leads to 76.5% accuracy; permutation matrices only, 75.8%; reflection matrices only, 71.1%; and neither, 70.7%. This suggests both permutation and reflection matrices are useful for RPGs.
6 DISCUSSION
The common practice in machine learning is to search for the best model in a large model space with many parameters or degrees of freedom (DoF), and then shrink the optimal model for deployment. Our key insight is that a direct and opposite approach might work better: we start from a lean model with a small DoF, which can be unpacked into a large model with many parameters. Then we can let gradient descent automatically find the best model under this DoF constraint. Our work is a departure from mainstream approaches towards model optimization and parameter reduction. We show how the model DoF and actual parameter size can be decoupled: we can define a large model with an arbitrary number of parameters using a small DoF.
We limit our scope to linear destructive weight sharing for different convolutional layers. However, in general, there might also exist nonlinear RPGs and efficient nonlinear generation functions to create convolutional kernels from a shared model ring W. Further, although RPG focuses on reducing model DoF, it can be quantized and pruned to further reduce the FLOPs and run time.
To sum up, we develop an efficient approach to build an arbitrarily complex neural network with any amount of DoF via a recurrent parameter generator. On a wide range of applications, including image classification, pose estimation, and multitask regression, we show RPG networks consistently achieve higher performance at the same model size. Further, analysis shows that such networks are less prone to overfitting and have higher performance on out-of-distribution data.
RPG can be added to any existing network flexibly with any amount of DoF at the user’s discretion. It provides new perspectives for recurrent models, equilibrium models, and model compression. It also serves as a tool for understanding the relationship between network properties and network DoF by factoring out the network architecture.
Reproducibility: We provide our code in supplementary materials.

1. What is the main contribution of the paper on recurrent parameter generator (RPG)?
2. What are the strengths of the proposed approach, particularly in its simplicity and performance?
3. What are the weaknesses and concerns regarding the method, such as understanding why RPG works and potential biases in experiment settings?
4. How does the reviewer assess the novelty and reasonableness of RPG compared to other compressive or pruning methods?
5. Are there any questions or suggestions for future work, such as applying RPG to different architectures like Transformers or investigating deeper into the discussed topics like quantization and security?
Summary Of The Paper
The paper proposes a recurrent parameter generator (RPG) that is able to generate an (ideally) arbitrarily large model based on a fixed set of inputs W. Unlike common pruning or compressing techniques, the authors argue that RPG decouples the expressivity from the degrees of freedom of a model, and that we can dynamically generate model parameters on the fly (while taking advantage of the pseudo-random seed) with a simple mechanism. Experiments show that this new way of generating model parameters is able to perform on par with, or better than, many existing compressive or pruning approaches.
Review
Overall I find this work interesting, as RPG provides a simple (in the sense of how the K_i's are generated from W) but performant strategy of parameterizing deep neural networks.
Strengths:
The method itself is simple in nature, although the actual design, such as the motivation for using permutations and sign reflections, remains somewhat unclear (e.g., why not use other ways to create orthogonal matrix?). It is surprising to me how well this method works.
The empirical results are relatively thorough, and the improvement over baselines is substantial.
Weaknesses and questions:
After reading this paper I still don't understand why RPG works. For example, compared to the conventional hypernetworks (which can be considered as a sort of input-based parameter generator, and so somewhat more reasonable), what is the new and key ingredient that RPG brings? From this point of view, while the exact form of RPG is novel, using a learnable module to generate parameters is not. While this is an empirical paper that presents a surprising finding, I feel that some discussion of why RPG is sensible/reasonable is lacking in the current version of the paper.
I have some doubts over whether some experiment settings may bias towards the RPG-based networks. For instance, typically ResNets are trained on ImageNet for 100 epochs, and generally more epochs bring (diminishingly) better results. With 100 epochs, ResNet-18/-34 are actually able to reach very good performance already. While the authors acknowledge that they found "this one [setting] to be the best for RPG", I wonder if we might want to do the same for the baseline models. For instance, the numbers reported in the original ResNet paper [1] seem already higher than in Table 3.
Although the method is extensively compared with other pruning or compression methods like IMP and Knapsack, the RPG method itself can also be seen as a standalone model that differs from pure compressive/pruning efforts. After all, there is nothing "compressed" or "sparse" about the modeling. The fixed generating matrix, while unlearned, is actually still a parameter of the model, which just happens to be efficiently storable using the seed trick because of the way pseudo-random number generators work. But from a modeling point of view, the parameters θ of f(⋅, θ) definitely still contain R_i. This means ResNet-RPG is only one instantiation of the RPG framework; and I'd be very interested in seeing whether RPG works for other architectures (e.g., Transformers) that are structurally very different. Implementation-wise it should be pretty straightforward; you just need a new SuperConv1d or SuperLinear module.
I like that the paper gets into a lot of discussions (e.g., quantization, security, log-linear DoF-accuracy relationship), but it mostly just briefly touched the surface of these topics. These are all important questions to investigate further, and the current paper hasn't been able to provide much insight other than the good empirical performance with ResNets.
(This is a question, not a weakness.) For the comparison with MDEQ, do you use an RPG to generate the multiscale deep equilibrium layer's parameters (which creates multi-level feature map)? Or is it also just ResNet-RPG?
Minor:
Table 9 format is slightly messed up.
In Sec. 5.2, the paper claims "ResNet-RPG18 ... backbone parameters". Probably you want to point to Appendix A here.
[1] https://arxiv.org/pdf/1512.03385.pdf |
ICLR | Title
Recurrent Parameter Generators
Abstract
We present a generic method for recurrently using the same parameters for many different convolution layers to build a deep network. Specifically, for a network, we create a recurrent parameter generator (RPG), from which the parameters of each convolution layer are generated. Though using recurrent models to build a deep convolutional neural network (CNN) is not entirely new, our method achieves significant performance gain compared to the existing works. We demonstrate how to build a one-layer-size neural network to achieve similar performance compared to other traditional CNN models on various applications and datasets. We use the RPG to build a ResNet18 network with the number of weights equivalent to one convolutional layer of a conventional ResNet and show this model can achieve 67.2% ImageNet top-1 accuracy. Additionally, such a method allows us to build an arbitrarily complex neural network with any amount of parameters. For example, we build a ResNet34 with model parameters reduced by more than 400 times, which still achieves 41.6% ImageNet top-1 accuracy. Furthermore, the RPG can be further pruned and quantized for better run-time performance in addition to the model size reduction. We provide a new perspective for model compression. Rather than shrinking parameters from a large model, RPG sets a certain parametersize constraint and uses the gradient descent algorithm to automatically find the best model under the constraint. Extensive experiment results are provided to demonstrate the power of the proposed recurrent parameter generator.
1 INTRODUCTION
Deep learning has achieved great success with increasingly more training data and deeper & larger neural networks: A recently developed NLP model, GPT-3 (Brown et al., 2020), has astonishingly 175 billion parameters! While the model performance generally scales with the number of parameters (Henighan et al., 2020), with parameters outnumbering training data, the model is significantly over-parameterized. Tremendous effort has been made to reduce the parameter redundancy from different perspectives, including neural network pruning (LeCun et al., 1990; Han et al., 2016; Liu et al., 2018), efficient network design spaces (Howard et al., 2017; Iandola et al., 2016; Sandler et al., 2018), parameter regularization (Wan et al., 2013; Wang et al., 2020a; Srivastava et al., 2014), model quantization (Hubara et al., 2017; Rastegari et al., 2016; Louizos et al., 2019), neural architecture search (Zoph & Le, 2017; Cai et al., 2018; Wan et al., 2020), recurrent models (Bai et al., 2019; 2020; Wei et al., 2016), multi-task feature encoding (Ramamonjisoa & Lepetit, 2019; Hao et al., 2021), etc.
One of the most prominent approaches in this direction is the pruning-based model compression, which dates back to the late 80s or early 90s (Mozer & Smolensky, 1989; LeCun et al., 1990) and has enjoyed a resurgence (Han et al., 2016; Blalock et al., 2020) recently. These pruning methods seek to remove the unimportant parameters from a pre-trained large neural network and can frequently achieve an enormous model-compression ratio.
Though sharing a similar motivation to reduce the parameter redundancy, we explore an entirely different territory of model parameter reduction: rather than compressing a large model, we define an arbitrarily large model based on a fixed set of parameters to maximize the model capacity. In this work, we propose to define many different layers in a deep neural network based on a fixed amount of parameters, which we call recurrent parameter generator (RPG). That is, we differentiate the number of model parameters and degrees of freedom (DoF). Traditionally, model parameters are treated independently of each other; the total number of parameters is the number of DoF. However, by tapping into how a core set of free parameters can be assigned to the neural network model, we can develop a large model of many parameters with a small degree of freedom. In other words,
there is excess capacity in neural network models independent of how and where the parameters are used in the network. Even at the level of individual scalar values, parameters can be reused in another arbitrary location of the deep network architecture without significantly impacting model performance. Surprisingly, backpropagation training of a deep network is able to cope with that the same parameter can be assigned to multiple random locations in the network without significantly impacting model performance. Through extensive experiments, we show that a large neural network does not need to be over overparameterized to achieve competitive performance. Particularly, a ResNet18 can be implemented with the number of weights equivalent to one convolution layer in a conventional ResNet (4.72× parameter reduction) and still achieve 67.2% ImageNet top-1 accuracy. The proposed method is also extremely flexible in reducing the model parameters. In some sense, the proposed RPG method can be viewed as an automatic model parameter reduction technique, which explores the optimal accuracy-parameter trade-off. When we reduce the model parameter, RPG shows graceful performance degradation, and its compression results are frequently on par with the SOTA pruning methods besides the flexibility. Even if we reduce the ResNet18 backbone parameters to 36K, which is about 300× reduction, ResNet18 can still achieve 40.0% ImageNet top-1 accuracy. Notably, we choose a destructive parameter sharing method (Cheung et al., 2019) for RPG in this work, which discourages any potential representation sharing from layer to layer. Compared to other recurrent weight-sharing methods, e.g., convolutional pose machine (CPM) or multi-scale deep equilibrium models (MDEQ), our method achieves competitive performance on various benchmarks. Further, we show RPG can be quantized and pruned to improve FLOPs and run time with very tiny accuracy drops. This makes RPG a strong and practical baseline for probing whether there is nontrivial representation sharing within any recurrent network.
To summarize, we make the following contributions:
1. This work provides a new perspective towards automatic model parameter reduction: we can define a neural network with a certain DoF constraint and let gradient descent optimization automatically find the best model under the desired constraint.
2. We propose the recurrent parameter generator (RPG), which decouples the network architecture and the network DoF. Given a certain neural network architecture, we can flexibly choose any DoF to construct the network.
3. By separating the network architecture from the parameter generator, RPG becomes a tool for us to understand the relationship between the model DoF and the network performance. We observe an empirical log-linear DoF-Accuracy relationship.
2 RELATED WORK
There are many important efforts to compress neural networks or to reduce the redundancy in neural network parameters. We discuss each of the approaches and their relationships to our work.
Model Pruning, Neural Architecture Search, and Quantization. Model pruning seeks to remove the unimportant parameters in a trained model. Recently, it has been proposed to use neural architecture search as a form of coarse-grained model pruning (Yu et al., 2018; Dong & Yang, 2019). Another related effort is neural network quantization (Hubara et al., 2017; Rastegari et al., 2016; Louizos et al., 2019), which seeks to reduce the number of bits used for each parameter and can frequently reduce the model size by 4× with minimal accuracy drop. More recently, Dollár et al. (2021) present a framework for analyzing model scaling strategies that considers network properties such as FLOPs and activations.
Parameter Regularization and Priors. Another highly related direction is parameter regularization. Regularization has been widely used to reduce model redundancy (Krogh & Hertz, 1992), alleviate model overfitting (Srivastava et al., 2014; Wan et al., 2013), and ensure desired mathematical regularity (Wang et al., 2020a). RPG can be viewed as a parameter regularization in the sense that weight sharing poses many equality constraints to weights and regularizes weights to a low-dimensional space. HyperNeat (Stanley et al., 2009) and CPPNs (Stanley, 2007) use networks to determine the weight between two neurons as a function of their positions. Karaletsos et al. (2018) and Karaletsos & Bui (2020) introduced a similar idea by providing a hierarchical prior for network parameters.
Recurrent Networks and Deep Equilibrium Models. Recurrence and feedback have been shown in psychology and neuroscience to act as modulators or competitive inhibitors that aid feature grouping (Gilbert & Sigman, 2007), figure-ground segregation (Hupé et al., 1998), and object recognition (Wyatte et al., 2012). Recurrence-inspired mechanisms have also achieved success in feed-forward models. There are two main ways of employing recurrence, depending on whether weights are shared across recurrent modules. ResNet (He et al., 2016), a representative of reusing similar structures without weight sharing, introduces parallel residual connections and achieves better performance by going deeper. Similarly, some works (Szegedy et al., 2015; Srivastava et al., 2015) find it useful to iteratively inject thus-far representations into the feed-forward network. Stacked inference methods (Ramakrishna et al., 2014; Wolpert, 1992; Weiss & Taskar, 2010) are also related, although they consider each output in isolation. Several works find sharing weights across recurrent modules beneficial, with applications in temporal modelling (Weiss & Taskar, 2010; Xingjian et al., 2015; Karpathy & Fei-Fei, 2015), spatial attention (Mnih et al., 2014; Butko & Movellan, 2009), pose estimation (Wei et al., 2016; Carreira et al., 2016), and beyond (Li et al., 2016; Zamir et al., 2017). Such methods usually shine in modeling long-term dependencies. In this work, we recurrently share weights across different layers of a feedback network to reduce network redundancy.
Given that stacking weight-shared modules improves performance, researchers have considered running such modules to effectively infinite depth by making the sequential modules converge to a fixed point (LeCun et al., 1988; Bai et al., 2019). Applying such equilibrium models to existing networks, they show improved performance on many natural language processing (Bai et al., 2019) and computer vision tasks (Bai et al., 2020; Wang et al., 2020b). One issue with deep equilibrium models is that forward and backward propagation usually take many more iterations than in explicit feed-forward networks. Some work (Fung et al., 2021) improves the efficiency by making the backward propagation Jacobian-free. Another issue is that infinite depth and fixed points may be unnecessary, or even too strict, for some tasks. Instead of aiming for infinite depth, our model shares parameters to a certain level. We empirically compare with equilibrium models in Section 5.
Efficient Network Space and Matrix Factorization. Convolution is an efficient and structured matrix-vector multiplication. Arguably, the most fundamental idea in building efficient linear systems is matrix factorization. Given the redundancy in deep convolutional neural network parameters, one can leverage the matrix factorization concept, e.g., factorized convolutions, and design more efficient network classes (Howard et al., 2017; Iandola et al., 2016; Tan & Le, 2019; Sandler et al., 2018).
3 RECURRENT PARAMETER GENERATOR
We define recurrent parameter generators and show a certain kind of generating matrices that leads to destructive weight sharing. For better parameter capacity, we introduce an even sampling strategy.
Recurrent Parameter Generator. Assume we are constructing a deep convolutional neural network that contains $L$ different convolution layers. Let $K_1, K_2, \dots, K_L$ be the corresponding $L$ convolutional kernels (in this paper, we treat each convolutional kernel as a vector; when the kernel is used to do the convolution, it is reshaped into the corresponding shape). Rather than using separate sets of parameters for different convolution layers, we create a single set of parameters $W \in \mathbb{R}^N$ and use it to generate the parameters of each convolution layer:
$$K_i = R_i \cdot W, \quad i \in \{1, \dots, L\}, \tag{1}$$
where $R_i$ is a fixed, predefined generating matrix used to generate $K_i$ from $W$. We call $\{R_i\}$ and $W$ the recurrent parameter generator (RPG). In this work, we always assume that the size of $W$ is smaller than the total number of model parameters, i.e., $|W| \leq \sum_i |K_i|$. This means an element of $W$ will generally be used in more than one layer of the neural network. Additionally, the gradient of $W$ is a linear superposition of the gradients from each convolution layer: during training, if convolution kernel $K_i$ receives gradient $\partial \ell / \partial K_i$, where $\ell$ is the loss function, then by the chain rule the gradient of $W$ is
$$\frac{\partial \ell}{\partial W} = \sum_{i=1}^{L} R_i^{T} \cdot \frac{\partial \ell}{\partial K_i}. \tag{2}$$
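To make Eqs. (1)-(2) concrete, below is a minimal PyTorch sketch in which each layer's kernel is read out of a shared parameter vector through a fixed random index map plus element-wise sign flips; the class name, sizes, and the simple index map are our illustrative choices rather than the authors' exact implementation (in particular, the paper additionally equalizes how often each element of W is used, as described later). Autograd realizes Eq. (2) automatically, since the gradient of W scatter-adds contributions from every layer.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class RPG(nn.Module):
    """Shared free parameters W plus fixed per-layer generating maps (a sketch)."""

    def __init__(self, dof, layer_shapes, seed=0):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dof) * 0.01)  # the shared parameters
        self.shapes = list(layer_shapes)
        g = torch.Generator().manual_seed(seed)  # seeds regenerate all maps
        self.index, self.sign = [], []
        for shape in self.shapes:
            n = math.prod(shape)
            # A fixed random index map into W plays the role of the permutation;
            # random +/-1 entries play the element-wise sign reflection.
            self.index.append(torch.randint(0, dof, (n,), generator=g))
            self.sign.append(torch.randint(0, 2, (n,), generator=g) * 2.0 - 1.0)

    def kernel(self, i):
        # Eq. (1): K_i = R_i . W, reshaped into a conv kernel.
        return (self.sign[i] * self.W[self.index[i]]).view(self.shapes[i])


rpg = RPG(dof=4096, layer_shapes=[(16, 3, 3, 3), (32, 16, 3, 3)])
x = torch.randn(1, 3, 8, 8)
h = F.relu(F.conv2d(x, rpg.kernel(0), padding=1))
y = F.conv2d(h, rpg.kernel(1), padding=1)
y.sum().backward()  # rpg.W.grad superposes gradients from both layers (Eq. 2)
```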
Generating Matrices. There are many different ways to create the generating matrices {Ri}. In this work, we primarily explore the destructive generating matrices, which tend to prevent different kernels from sharing the representation during weight sharing.
Destructive Weight Sharing. For easier discussion, let us first look at a special case where the generating matrices $\{R_i\}$ are square and all convolutional kernels have the same size $d_{\mathrm{in}} \times d_{\mathrm{out}} \times w \times h$, with the input channel dimension $d_{\mathrm{in}}$ always equal to the output channel dimension $d_{\mathrm{out}}$. In this case, a filter $f$ in a kernel can be treated as a vector in $\mathbb{R}^{dwh}$. Further, we choose $R_i$ to be a block-diagonal matrix $R_i = \operatorname{block-diag}\{A_i, A_i, \dots, A_i\}$, where $A_i \in O(dwh)$ is an orthogonal matrix that rotates every filter of the kernel $K_i$ in the same fashion. Similar to Proposition 2 in (Cheung et al., 2019), we show in Appendix B that if $A_i, A_j$ are sampled from the $O(M)$ Haar distribution and $f_i, f_j$ are corresponding filters of $K_i, K_j$ (generated from $W$ by $R_i, R_j$ respectively), then $\mathbb{E}\left[\langle f_i, f_j \rangle\right] = 0$ and $\mathbb{E}\left[\left\langle \frac{f_i}{\|f_i\|}, \frac{f_j}{\|f_j\|} \right\rangle^{2}\right] = \frac{1}{M}$. Since $M$ is usually large, corresponding filters of $K_i$ and $K_j$ are close to orthogonal and generally dissimilar. This shows that even though $\{K_i\}$ are generated from the same $W$, they do not share the representation.
Even though $\{A_i\}$ are not updated during training, the size of $A_i$ can be quite large in general. In practice, we can use a permutation $p \in P(M)$ and an element-wise random sign reflection $b \in B(M)$ to construct a subset of the orthogonal group $O(M)$, i.e., we choose $A_i \in \{b \circ p \mid b \in B(M), p \in P(M)\}$ (permutations and element-wise random sign reflections are conceptually subgroups of the orthogonal group, but we never use them in matrix form, for obvious efficiency reasons). Since pseudo-random numbers are used, it takes only two random seeds to store a random permutation and an element-wise random sign reflection.
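As a quick sanity check of this destructiveness argument (ours, not from the paper), the snippet below draws two vectors from the same W with independent permutation-plus-sign-flip maps and confirms that their cosine similarity is near zero:

```python
import torch

torch.manual_seed(0)
M = 4096
W = torch.randn(M)

def apply_A(w):
    # One A_i from {b o p}: a random permutation followed by random sign flips.
    perm = torch.randperm(w.numel())
    sign = torch.randint(0, 2, (w.numel(),)) * 2.0 - 1.0
    return sign * w[perm]

f_i, f_j = apply_A(W), apply_A(W)
cos = torch.dot(f_i, f_j) / (f_i.norm() * f_j.norm())
print(f"cosine similarity: {cos.item():+.4f}")  # ~0; its square is ~1/M
```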
In this work, we generalize the usage of $R_i$ beyond the block-diagonal generating matrices described above. $\{K_i\}$ may have different sizes, which can even be chosen larger than the size of $W$. When $K_i \in \mathbb{R}^{N_i}$ is larger than $W \in \mathbb{R}^{N}$, the corresponding generating matrix $R_i$ is a tall matrix. There are many ways to efficiently create the generating matrices; we use random permutations $P(N_i)$ and element-wise random sign reflections $B(N_i)$ to create $R_i$:
$$R_i \in \{b \circ p \mid b \in B(N_i), p \in P(N_i)\}, \quad i = 1, \dots, L. \tag{3}$$
Such $\{R_i\}$ tend to produce destructive weight sharing and better utilization of the parameter capacity.
Even Parameter Distribution for Different Layers. While it is easy to randomly sample elements from $W$ when generating the parameters of each layer, this may not be optimal, as some elements in
$W$ may never be used, and some elements may be used more than average. We use an equalization technique to guarantee that all elements of $W$ are evenly sampled. Suppose the size of $W$ is $N$ and the total number of parameters of the layers to be generated is $M$, with $M > N$. We first generate $\lfloor M/N \rfloor$ copies of the array $\{x \mid x = 1, \dots, N\}$ and concatenate them with $(M \bmod N)$ elements randomly sampled from $\{x \mid x = 1, \dots, N\}$. We call the concatenated array the index array $u \in \mathbb{R}^{M}$. We randomly shuffle all elements of $u$. When initializing each layer's parameters, we sequentially take the indices of the chosen elements from the shuffled index array $u$. In this way, each layer's parameters are randomly and evenly sampled from $W$. We refer to $W$ as a model ring, since its elements are recurrently used in a loop. For storage efficiency, we only need to save two random seeds (one for sampling the $(M \bmod N)$ elements and one for shuffling) instead of the large index array $u$.
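A small NumPy sketch of this equalization (function and variable names are ours) makes the construction explicit and shows how two seeds regenerate the whole index array:

```python
import numpy as np

def even_index_array(N, M, seed_sample=0, seed_shuffle=1):
    full = np.tile(np.arange(N), M // N)  # floor(M/N) full passes over W
    extra = np.random.default_rng(seed_sample).choice(N, size=M % N, replace=False)
    u = np.concatenate([full, extra])     # length M
    np.random.default_rng(seed_shuffle).shuffle(u)
    return u  # layer i sequentially consumes its slice of u to index into W

u = even_index_array(N=1000, M=2500)
counts = np.bincount(u, minlength=1000)
print(counts.min(), counts.max())  # every element of W is used 2 or 3 times
```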
Batch Normalization. Model performance is relatively sensitive to the batch normalization parameters. For better performance, each of the convolution layers needs to have its own batch normalization parameters. In general, however, the size of batch normalization is relatively negligible. Yet when W is extremely small (e.g., 36K parameters), the size of batch normalization should be considered.
4 RECURRENT PARAMETER GENERATOR AT MULTIPLE SCALES
In the previous section, we discussed the general idea of superposition, where only one RPG is created and shared globally across all layers. We could also create several local RPGs, each shared at a certain scale, such as blocks or sub-networks. Such superpositions may be useful for certain applications such as recurrent modeling.
RPGs at Block-Level. Researchers propose network architectures that reuse the same design of network blocks multiple times for higher learning capacity, as discussed in the related work. Instead of using one global RPG for the entire network, we could alternatively create several RPGs that are shared within certain network blocks. We take ResNet18 (He et al., 2016) as a concrete example (Fig.3). ResNet18 has four building blocks. Every block has 2 residual convolution modules. To superpose ResNet18 at block scale, we create four local RPGs. Each RPG is shared within the corresponding building block, where the size of the RPG is flexible and can be determined by users.
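For illustration, a block-level configuration for ResNet18 might look like the following; the per-block DoF values and the simplified square kernel shapes are our placeholders, and strided/projection layers are omitted for brevity:

```python
# Hypothetical per-block configuration; each entry would instantiate one local
# generator (e.g., the RPG sketch from Section 3), so no parameters are shared
# across blocks and the per-block DoF is a free design choice.
block_configs = {
    "block1": dict(dof=64_000,  layer_shapes=[(64, 64, 3, 3)] * 4),
    "block2": dict(dof=128_000, layer_shapes=[(128, 128, 3, 3)] * 4),
    "block3": dict(dof=256_000, layer_shapes=[(256, 256, 3, 3)] * 4),
    "block4": dict(dof=512_000, layer_shapes=[(512, 512, 3, 3)] * 4),
}
# Convolution j of block k then reads its kernel from block k's own generator.
```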
RPGs at Sub-Network-Level. Reusing sub-networks, i.e., recurrent networks, has achieved success in many tasks, as the sub-networks iteratively refine and improve the prediction. Usually, weights are shared when reusing the sub-networks. This may not be optimal: sub-networks at different stages iteratively improve the prediction, and shared weights may limit the capacity to adapt to different stages. On the other hand, not sharing weights at all greatly increases the model size. We instead superpose different sub-networks with one or more RPGs. Superposed sub-networks can have a much smaller model size, while the parameters of different sub-networks undergo destructive changes instead of direct copy-paste. We show applications of superposed sub-networks for pose estimation and multitask regression (Sections 5.3 and 5.4).
5 EXPERIMENTAL RESULTS
We evaluate the performance of RPG on the various tasks illustrated in Fig.2. For convenience, we refer to the model DoF as the number of parameters or parameter size, although their difference was discussed in the Introduction. For classification, RPG is used for the entire network except for the last fully connected (fc) layer; thus, we discuss the reduction in backbone parameters. For example, ResNet18 has 11M backbone parameters and 512K fc parameters, and RPG is applied to reduce the 11M backbone parameters only. Experiments are conducted on NVIDIA GeForce RTX 2080 Ti GPUs.
5.1 CIFAR CLASSIFICATION
Implementation Details. All CIFAR experiments use a batch size of 128, weight decay of 5e-4, and an initial learning rate of 0.1 with a gamma of 0.1 at epochs 60, 120, and 160. We use Kaiming initialization (He et al., 2015) with adaptive scaling. Specifically, the shared parameters are initialized with a particular variance, and the generated parameters of each layer are scaled to match the Kaiming initialization.
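One plausible reading of this adaptive scaling (our interpretation, with hypothetical names) is a fixed per-layer gain that rescales each generated kernel so that its standard deviation matches the Kaiming target:

```python
import math

def kaiming_gain(shape, shared_std=1.0):
    """Per-layer scalar so that a kernel generated from W (std = shared_std)
    matches the Kaiming-normal std sqrt(2 / fan_in) for ReLU networks."""
    out_ch, in_ch, kh, kw = shape
    fan_in = in_ch * kh * kw
    return math.sqrt(2.0 / fan_in) / shared_std

# e.g., multiply a generated 3x3, 64->64 kernel by this fixed gain:
print(kaiming_gain((64, 64, 3, 3)))  # ~0.0589
```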
Compared to Deep Equilibrium Models. As a representative of implicit models, deep equilibrium models (Bai et al., 2019) can reduce model redundancy by finding fixed points via additional optimization. We compare the image classification accuracy on the CIFAR10 and CIFAR100 datasets, as well as the inference time on CIFAR100 (Table 1). Following the settings of MDEQ (Bai et al., 2020), an image is sequentially fed into the initial convolutional block, the multi-scale deep equilibrium block (dubbed the MS block), and the classification head. MDEQ (Bai et al., 2020) achieves infinitely many MS blocks by finding the fixed point of the MS block. We reuse the MS block two to four times without increasing the number of parameters. Our RPG achieves a 3.4% - 5.8% gain on CIFAR10 and a 3% - 5.9% gain on CIFAR100. Our inference time is 15 - 25 times smaller than MDEQ's, since MDEQ needs additional iterations to solve for the equilibrium.
Global RPG with Varying # Parameters. We create one global RPG to generate parameters for the convolution layers of ResNet and refer to it as ResNet-RPG. We report the CIFAR100 top-1 accuracy of ResNet-RPG18 and ResNet-RPG34 at different numbers of parameters (Fig.4 and Table 3). Compared to ResNet, ResNet-RPG achieves higher accuracy at the same parameter size. Specifically, we achieve 36% CIFAR100 accuracy with only 8K backbone parameters. Furthermore, ResNet-RPG34 achieves higher accuracy than ResNet-RPG18, indicating that increased time complexity gives a performance gain.
Local RPGs at the Block-Level. In the previous ResNet-RPG experiments, we use one global RPG (Fig.3-Upper). We also evaluate the performance when RPGs are shared locally at the block level, as shown in Fig.3-Lower. In Table 2, compared to plain ResNet18 at the same number of parameters, our block-level RPG network gives a 1.0% gain. In contrast, our ResNet-RPG (with evenly distributed parameters) gives a 1.4% gain. Using one global RPG, where the parameters of each layer are evenly distributed, is thus 0.4% higher than using multiple local RPGs.
Comparison to Baselines. Table 2 compares RPG with other baseline parameter reduction methods, including random weight sharing, weight sharing with the hashing trick (Chen et al., 2015), and weight sharing with Lego filters (Yang et al., 2019). At the same number of parameters, our RPG outperforms all other baselines, demonstrating the effectiveness of the proposed method.
5.2 IMAGENET CLASSIFICATION
Implementation Details. All ImageNet experiments use a batch size of 256, weight decay of 3e-5, and an initial learning rate of 0.3 with a gamma of 0.1 every 75 epochs, for 225 epochs in total. Our schedule differs from the standard one because the weight-sharing mechanism requires different training dynamics. We tried a few settings and found this one to be the best for RPG.
RPG with Varying # Parameters. We use one RPG with different numbers of parameters for ResNet and report the top-1 accuracy (Table 3 and Fig.1(Right)). ResNet-RPGs consistently achieve higher performance than ResNets with the same number of parameters. Specifically, ResNet-RPG34 achieves the same accuracy (73.4%) as ResNet34 with only half of the ResNet34 backbone parameters. ResNet-RPG18 likewise matches the accuracy of ResNet18 with only half of the ResNet18 backbone parameters. Further, we find RPG networks have higher generalizability (Section 5.6).
Power Law. Empirically, accuracy and the number of parameters follow a power law when the RPG model size is below 50% of the original plain ResNet model size. The exponents of the power laws are the same for ResNet18-RPG and ResNet34-RPG on ImageNet when compared against ResNet34 accuracies. This scaling law may be useful for estimating network performance without training the network. Similarly, Henighan et al. (2020) identify a power law between performance and model size for transformers. The proposed RPG enables under-parameterized models for large-scale datasets such as ImageNet, which may enable new studies and findings.
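In symbols, the observed trend corresponds to linearity on log-log axes (the constants $a$ and $b$ below are ours, fit per architecture family):
$$\log(\mathrm{Acc}) \approx a + b \,\log(\mathrm{DoF}) \quad\Longleftrightarrow\quad \mathrm{Acc} \approx e^{a} \cdot \mathrm{DoF}^{\,b},$$
with the exponent $b$ empirically shared by ResNet18-RPG and ResNet34-RPG on ImageNet in the sub-50% regime described above.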
5.3 POSE ESTIMATION
Implementation Details. We superpose sub-networks for pose estimation with a globally shared RPG. We use hourglass networks (Newell et al., 2016) as the backbone. The input image is first fed to an initial convolution block to obtain a feature map. The feature map is then fed to multiple stacked pose estimation sub-networks. Each sub-network outputs a pose estimation prediction, which is penalized by the pose estimation loss. The convolutional pose machine (CPM) (Wei et al., 2016) shares all weights across the different sub-networks. We instead create one global RPG and generate the parameters of each sub-network from it. Our model size is set to be the same as CPM's. We also compare with larger models whose sub-network parameters are not shared.
We evaluate on the MPII Human Pose dataset (Andriluka et al., 2014), a benchmark for articulated human pose estimation, which consists of over 28K training samples covering over 40K people with annotated body joints. We use the hourglass network (Newell et al., 2016) as the backbone and follow all their settings.
Results and Analysis. We report the Percentage of Correct Key-points at a 50% threshold (PCK@0.5) for different methods in Table 4. CPM (Wei et al., 2016) shares all parameters across different sub-networks. We use one RPG that is shared globally, with the same size as CPM. For reference, we also compare with the no-sharing model as the performance ceiling. Increasing the number of recurrences leads to a performance gain for all methods. At the same model size, RPG achieves a higher PCK@0.5 than CPM. Increasing the number of parameters by not sharing sub-network parameters also leads to some performance gain.
5.4 MULTI-TASK REGRESSION
Implementation Details. We superpose sub-networks for multi-task regression with multiple RPGs at the building-block level. We focus on predicting depth and normal maps from a given image. We stack multiple copies of SharpNet (Ramamonjisoa & Lepetit, 2019), a network for monocular depth and normal estimation. Specifically, we create multiple RPGs at the SharpNet building-block level; that is, the parameters of corresponding blocks of different sub-networks are generated from the same RPG.
We evaluate the monocular depth and normal prediction performance on the Stanford 3D indoor scene dataset (Armeni et al., 2017). It contains over 70K images with corresponding depths and normals, covering over 6,000 m² of indoor area. We follow all settings of SharpNet (Ramamonjisoa & Lepetit, 2019), a state-of-the-art monocular depth and normal estimation network.
Results and Analysis. We report the mean square errors for depth and normal estimation in Table 5. Compared to one-time inference without recurrence, our RPG network gives 3% and 2% gains for depth and normal estimation, respectively. Directly sharing weights but using new batch normalization layers decreases performance by 1.2% and 0.3% for depth and normal, respectively. Sharing weights and normalization layers decreases performance by a further 0.7% and 0.9% for depth and normal.
5.5 PRUNING RPG
Fine-Grained Pruning. Fine-grained pruning methods aim at reducing the model parameters by sparsifying weight matrices. Such methods usually do not reduce the inference speed, although custom algorithms (Gale et al., 2020) may improve the speed. At the same number of parameters, RPG outperforms state-of-the-art fine-grained pruning method IMP (Frankle et al., 2019). Accuracy drops of RPG and IMP are similar, both around 2% (Table 6). It is worth noting that IMP could have faster inference speed with sparse GPU kernels (Gale et al., 2020).
Coarse-Grained Pruning. While RPG is not designed to reduce FLOPs, it can be combined with coarse-grained pruning to reduce FLOPs. We prune RPG filters with the lowest $\ell_1$ norms. Table 7 shows that the pruned RPG achieves performance on par with the state-of-the-art coarse-grained pruning method Knapsack (Aflalo et al., 2020) at the same FLOPs.
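A minimal sketch of this $\ell_1$ criterion (ours; the keep ratio and the per-layer policy are assumptions rather than the paper's exact recipe):

```python
import torch

def prune_filters_l1(kernel, keep_ratio=0.75):
    # kernel: (out_channels, in_channels, h, w); rank output filters by l1 norm.
    norms = kernel.abs().sum(dim=(1, 2, 3))
    k = max(1, int(keep_ratio * kernel.shape[0]))
    keep = torch.topk(norms, k).indices.sort().values
    # Downstream layers must drop the matching input channels as well.
    return kernel[keep], keep

pruned, kept = prune_filters_l1(torch.randn(64, 32, 3, 3))
print(pruned.shape)  # torch.Size([48, 32, 3, 3])
```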
5.6 ANALYSIS
Comparison to Pruning Methods. We report our ResNet18-RPG performance with different number of parameters on ImageNet and some baseline pruning methods in Fig.1(Right). Our RPG networks outperform SOTA pruning methods such as (Aflalo et al., 2020; Dong & Yang, 2019; He et al., 2019; 2018; Dong et al., 2017; Khetan & Karnin, 2020). Specifically, at the same number of parameters, our RPG network has 0.6% gain over the knapsack pruning (Aflalo et al., 2020), a method that achieves the best ImageNet pruning accuracy.
Generalizability. We report the performance gap between the training and validation sets on ImageNet (Table 8(a)) and MPII pose estimation (Table 8(b)). CPM (Wei et al., 2016) serves as the baseline pose estimation method. RPG models consistently achieve lower gaps between the training and validation sets, indicating that they suffer less from over-fitting.
We also report the out-of-distribution performance of RPG models. ObjectNet (Barbu et al., 2019) contains 50K images with 113 classes overlapping with ImageNet. Previous models are reported to have a large performance drop on ObjectNet. We directly evaluate the performance of the ImageNet-trained model on ObjectNet without any fine-tuning (Table 8(c)). With the same number of backbone parameters, our ResNet-RPG achieves a 3.1% gain compared to ResNet18. With the same network architecture design, our ResNet-RPG achieves a 0.5% gain compared to ResNet34. This indicates that our RPG networks have higher out-of-distribution performance even with smaller model sizes.
Quantization. Network quantization can reduce model size with minimal accuracy drop. It is of interest to study whether RPG models, whose parameter counts have already been shrunk, can be quantized. After 8-bit quantization, the accuracy of ResNet18-RPG (5.6M parameters) drops by only 0.1 percentage points on ImageNet, indicating that RPG can be quantized for further size reduction. Details are in Appendix A.
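To illustrate, a simple symmetric per-tensor 8-bit scheme (not necessarily the quantizer used in Appendix A) can be applied directly to the shared W:

```python
import torch

def quantize_int8(w):
    scale = w.abs().max() / 127.0  # symmetric per-tensor scale
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q, scale):
    return q.float() * scale

W = torch.randn(4096)
q, s = quantize_int8(W)
print(f"max abs error: {(dequantize(q, s) - W).abs().max().item():.4f}")
```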
Security. The permutation matrices generated by the random seed can be considered security keys needed to decode the model. In addition, only the random seeds used to generate the transformation matrices need to be saved and transferred, which is efficient in terms of size.
5.7 ABLATION STUDIES
We conduct ablation studies to understand the roles of the permutation and reflection matrices (Fig.5), evaluating ResNet-RPG34 with 2M backbone parameters. Using both permutation and reflection matrices leads to 76.5% accuracy; permutation matrices only, 75.8%; reflection matrices only, 71.1%; and neither, 70.7%. This suggests that both the permutation and reflection matrices are useful for RPGs.
6 DISCUSSION
The common practice in machine learning is to search for the best model in a large model space with many parameters or degrees of freedom (DoF), and then shrink the optimal model for deployment. Our key insight is that a direct and opposite approach might work better: we start from a lean model with a small DoF, which can be unpacked into a large model with many parameters, and then let gradient descent automatically find the best model under this DoF constraint. Our work is a departure from mainstream approaches to model optimization and parameter reduction. We show how the model DoF and the actual parameter size can be decoupled: we can define a large model with an arbitrary number of parameters using a small DoF.
We limit our scope to linear destructive weight sharing for different convolutional layers. However, in general, there might also exist nonlinear RPGs and efficient nonlinear generation functions to create convolutional kernels from a shared model ring W. Further, although RPG focuses on reducing model DoF, it can be quantized and pruned to further reduce the FLOPs and run time.
To sum up, we develop an efficient approach to build an arbitrarily complex neural network with any amount of DoF via a recurrent parameter generator. On a wide range of applications, including image classification, pose estimation, and multitask regression, we show that RPG networks consistently achieve higher performance at the same model size. Further, our analysis shows that such networks are less prone to overfitting and perform better on out-of-distribution data.
RPG can be added to any existing network flexibly with any amount of DoF at the user’s discretion. It provides new perspectives for recurrent models, equilibrium models, and model compression. It also serves as a tool for understanding the relationship between network properties and network DoF by factoring out the network architecture.
Reproducibility: We provide our code in supplementary materials.

Questions
1. What is the main contribution of the paper regarding parameter reduction in neural networks?
2. What are the strengths of the proposed technique, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, such as the lack of discussion on training convergence and the unclear plot illustrations?
4. How does the proposed technique compare to other approaches in terms of structured parameter sharing, and what are the implications of using different permutations of weights?
5. Could the authors provide further insights into why the proposed method works effectively, especially regarding the role of random sign flipping and even weight assignment?

Summary Of The Paper
The paper presents a technique for parameter reduction in neural networks. It follows the motivation of parameter redundancy in over-parameterized networks and aims to parameterize large networks with a smaller and shared set of weights. More specifically, a set of parameters based on a predetermined reduction factor is randomly assigned to various layers of a network and trained jointly. It is a simple technique that can be applied to any network architecture as the parameter generation is decoupled from the underlying architecture. The proposed method is evaluated in several tasks from classification to segmentation by using different network architectures. The experiments show improved or competitive performance when trained with similar or fewer parameters compared to the baselines.
Review
Pros: The proposed technique is simple yet effective. The evaluations show that it can be applied to different types of network architectures and in different tasks. I particularly find the generalization performance valuable. While the proposed model could be expected to achieve a lower gap between the training and the validation performance due to the smaller DoF, the out-of-distribution performance improves as well. The proposed technique also achieves better performance by using permutations of the weights in the subsequent iterations, if the base architecture follows an iterative refinement by passing the output of a layer/sub-network to itself. In my opinion, this could be a more interesting contribution claim than the parameter reduction. The proposed work enables a more efficient parameterization by reusing a smaller set of weights. It reduces the storage space. However, the actual model size and computational complexity remain the same.
Cons and questions:
Could the authors clarify the difference between the proposed method with the random weight assignment? If my understanding is correct, the permutation P(M) results in a similar random assignment effect. The “even sampling” and the random sign reflections seem to be the main difference in this case. If so, why the proposed technique works significantly better than the random assignment? Unfortunately, this comparison is available only in Tab. 2 (i.e., “Res34-random weight share”). If the permutation P(M) takes away any structure in the parameter selection matrices R_i, is the model diagram in Fig.3 (upper) accurate anymore?
This is a follow-up to my previous comment. In the pose estimation task (section 5.3), the CPM shares weights for the sub-networks and the network size remains constant as the number of sub-networks increases. RPG assigns the same weights to these sub-networks but shuffles them. Could it be concluded that using a different permutation of the same weights in the subsequent iterations improves the performance? Could the authors clarify and comment on this?
Although parameter reduction is one of the major contribution claims, only in one experiment (Tab. 2), parameter reduction baselines (i.e., -hash and -lego entries) are used.
It is not discussed in the paper. How does the proposed reduction technique affect the training convergence?
The plot in Fig.1 is not clear. Only the top-right corner of the plot could be illustrated. Similarly, Fig. 4 is hard to interpret with the caption.
I could not get the contribution claim in the security paragraph in section 5.6. Doesn’t it hold for any network?
The following work could be discussed in the related work: Dehghani, Mostafa, et al. "Universal transformers." arXiv preprint arXiv:1807.03819 (ICLR 19’).
-- Post-rebuttal update --
I thank the authors for their clarifications and additional experiments. After reading other reviews and responses, I decided to keep my score. As stressed by the authors, I agree that the contribution of this paper is not limited to compression. However, the experiments focus on the evaluation of the proposed method mainly in the parameter reduction tasks rather than providing insights on why and how it works. The authors addressed our concerns, yet it is still unclear why random sign flipping and even weight assignment are so effective. In my opinion, a comprehensive comparison of the proposed destructive weight-sharing approach with different concepts could improve the paper's contribution. For example, the RPG at block or sub-network levels introduces some locality. The "LegoNet" paper proposes a more structured parameter sharing approach. Comparing various techniques from fully random to slightly structured could be more insightful. However, I still find the performance in the generalization and the OOD experiments and the applicability of the proposed approach on different architectures valuable. Hence, this paper can be considered for acceptance.
Deep learning has achieved great success with increasingly more training data and deeper & larger neural networks: A recently developed NLP model, GPT-3 (Brown et al., 2020), has astonishingly 175 billion parameters! While the model performance generally scales with the number of parameters (Henighan et al., 2020), with parameters outnumbering training data, the model is significantly over-parameterized. Tremendous effort has been made to reduce the parameter redundancy from different perspectives, including neural network pruning (LeCun et al., 1990; Han et al., 2016; Liu et al., 2018), efficient network design spaces (Howard et al., 2017; Iandola et al., 2016; Sandler et al., 2018), parameter regularization (Wan et al., 2013; Wang et al., 2020a; Srivastava et al., 2014), model quantization (Hubara et al., 2017; Rastegari et al., 2016; Louizos et al., 2019), neural architecture search (Zoph & Le, 2017; Cai et al., 2018; Wan et al., 2020), recurrent models (Bai et al., 2019; 2020; Wei et al., 2016), multi-task feature encoding (Ramamonjisoa & Lepetit, 2019; Hao et al., 2021), etc.
One of the most prominent approaches in this direction is the pruning-based model compression, which dates back to the late 80s or early 90s (Mozer & Smolensky, 1989; LeCun et al., 1990) and has enjoyed a resurgence (Han et al., 2016; Blalock et al., 2020) recently. These pruning methods seek to remove the unimportant parameters from a pre-trained large neural network and can frequently achieve an enormous model-compression ratio.
Though sharing a similar motivation to reduce the parameter redundancy, we explore an entirely different territory of model parameter reduction: rather than compressing a large model, we define an arbitrarily large model based on a fixed set of parameters to maximize the model capacity. In this work, we propose to define many different layers in a deep neural network based on a fixed amount of parameters, which we call recurrent parameter generator (RPG). That is, we differentiate the number of model parameters and degrees of freedom (DoF). Traditionally, model parameters are treated independently of each other; the total number of parameters is the number of DoF. However, by tapping into how a core set of free parameters can be assigned to the neural network model, we can develop a large model of many parameters with a small degree of freedom. In other words,
there is excess capacity in neural network models independent of how and where the parameters are used in the network. Even at the level of individual scalar values, parameters can be reused in another arbitrary location of the deep network architecture without significantly impacting model performance. Surprisingly, backpropagation training of a deep network is able to cope with that the same parameter can be assigned to multiple random locations in the network without significantly impacting model performance. Through extensive experiments, we show that a large neural network does not need to be over overparameterized to achieve competitive performance. Particularly, a ResNet18 can be implemented with the number of weights equivalent to one convolution layer in a conventional ResNet (4.72× parameter reduction) and still achieve 67.2% ImageNet top-1 accuracy. The proposed method is also extremely flexible in reducing the model parameters. In some sense, the proposed RPG method can be viewed as an automatic model parameter reduction technique, which explores the optimal accuracy-parameter trade-off. When we reduce the model parameter, RPG shows graceful performance degradation, and its compression results are frequently on par with the SOTA pruning methods besides the flexibility. Even if we reduce the ResNet18 backbone parameters to 36K, which is about 300× reduction, ResNet18 can still achieve 40.0% ImageNet top-1 accuracy. Notably, we choose a destructive parameter sharing method (Cheung et al., 2019) for RPG in this work, which discourages any potential representation sharing from layer to layer. Compared to other recurrent weight-sharing methods, e.g., convolutional pose machine (CPM) or multi-scale deep equilibrium models (MDEQ), our method achieves competitive performance on various benchmarks. Further, we show RPG can be quantized and pruned to improve FLOPs and run time with very tiny accuracy drops. This makes RPG a strong and practical baseline for probing whether there is nontrivial representation sharing within any recurrent network.
To summarize, we make the following contributions: 1. This work provides a new perspective towards automatic model parameter reduction: we can
define a neural network with certain DoF constraint and let gradient descent optimization automatically find the best model under the desired constraint.
2. We propose the recurrent parameter generator (RPG), which decouples the network architecture and the network DoF. Given a certain neural network architecture, we can flexibly choose any DoF to construct the network.
3. By separating the network architecture from the parameter generator, RPG becomes a tool for us to understand the relationship between the model DoF and the network performance. We observe an empirical log-linear DoF-Accuracy relationship.
2 RELATED WORK
There are many important efforts to compress neural networks or to reduce the redundancy in neural network parameters. We discuss each of the approaches and their relationships to our work.
Model Pruning, Neural Architecture Search, and Quantization. Model pruning seeks to remove the unimportant parameters in a trained model. Recently, it’s proposed to use neural architecture search as a coarse-grained model pruning (Yu et al., 2018; Dong & Yang, 2019). Another related effort is neural network quantization (Hubara et al., 2017; Rastegari et al., 2016; Louizos et al., 2019), which seeks to reduce the bits used for each parameter and can frequently reduce the model size by 4× with minimal accuracy drop. More recently, Dollár et al. (2021) presents a framework for analyzing model scaling strategies that considers network properties such as FLOPs and activations.
Parameter Regularization and Priors. Another highly related direction is parameter regularization. Regularization has been widely used to reduce model redundancy (Krogh & Hertz, 1992), alleviate model overfitting (Srivastava et al., 2014; Wan et al., 2013), and ensure desired mathematical regularity (Wang et al., 2020a). RPG can be viewed as a parameter regularization in the sense that weight sharing poses many equality constraints to weights and regularizes weights to a low-dimensional space. HyperNeat (Stanley et al., 2009) and CPPNs (Stanley, 2007) use networks to determine the weight between two neurons as a function of their positions. Karaletsos et al. (2018) and Karaletsos & Bui (2020) introduced a similar idea by providing a hierarchical prior for network parameters.
Recurrent Networks and Deep Equilibrium Models. Recurrence and feedback have been shown in psychology and neuroscience to act as modulators or competitive inhibitors to aid feature grouping (Gilbert & Sigman, 2007), figure-ground segregation (Hupé et al., 1998) and object recognition (Wyatte et al., 2012). Recurrence-inspired mechanisms also achieve success in feed-forward models. There are two main types of employing recurrence based on if weights are shared across recurrent modules. ResNet (He et al., 2016), a representative of reusing similar structures without weight sharing, introduces parallel residual connections and achieves better performance by going deeper in networks. Similarly, some works (Szegedy et al., 2015; Srivastava et al., 2015) also suggest iteratively injecting thus-far representations to the feed-forward network useful. Stacked inference methods (Ramakrishna et al., 2014; Wolpert, 1992; Weiss & Taskar, 2010) are also related while they consider each output in isolation. Several works find sharing weights across recurrent modules beneficial. They demonstrate applications in temporal modelling (Weiss & Taskar, 2010; Xingjian et al., 2015; Karpathy & Fei-Fei, 2015), spatial attention (Mnih et al., 2014; Butko & Movellan, 2009), pose estimation (Wei et al., 2016; Carreira et al., 2016), and so on (Li et al., 2016; Zamir et al., 2017). Such methods usually shine in modeling long-term dependencies. In this work, we recurrently share weights across different layers of a feedback network to reduce network redundancy.
Given stacking weight-shared modules improve the performance, researchers consider running even infinite depth of such modules by making the sequential modules converge to a fixed point (LeCun et al., 1988; Bai et al., 2019). Employing such equilibrium models to existing networks, they show improved performance in many natural language processing (Bai et al., 2019) and computer vision tasks (Bai et al., 2020; Wang et al., 2020b). One issue with deep equilibrium models is that the forward and backward propagation usually takes much more iterations than explicit feed-forward networks. Some work (Fung et al., 2021) improves the efficiency by making the backward propagation Jacobian free. Another issue is that infinite depth and fixed point may not be necessary or even too strict for some tasks. Instead of achieving infinite depth, our model shares parameters to a certain level. We empirically compare with equilibrium models in Section 5.
Efficient Network Space and Matrix Factorization. Convolution is an efficient and structured matrix-vector multiplication. Arguably, the most fundamental idea in building efficient linear systems is matrix factorization. Given the redundancy in deep convolutional neural network parameters, one can leverage the matrix factorization concept, e.g., factorized convolutions, and design more efficient network classes (Howard et al., 2017; Iandola et al., 2016; Tan & Le, 2019; Sandler et al., 2018).
3 RECURRENT PARAMETER GENERATOR
We define recurrent parameter generators and show a certain kind of generating matrices that leads to destructive weight sharing. For better parameter capacity, we introduce an even sampling strategy.
Recurrent Parameter Generator. Assuming we are constructing a deep convolutional neural network, which contains L different convolution layers. Let K1,K2, . . . ,KL be the corresponding L convolutional kernels 1. Rather than using separate sets of parameters for different convolution layers, we create a single set of parameters W ∈ <N and use it to generate the corresponding parameters for each convolution layer: Ki = Ri ·W, i ∈ {1, . . . , L} (1) where Ri is a fixed predefined generating matrix, which is used to generate Ki from W. We call {Ri} and W the recurrent parameter generator (RPG). In this work, we always assume that the size of W is smaller than the total parameters of the model, i.e., |W| ≤ ∑ i |Ki|. This means an element of W will generally be used in more than one layer of a neural network. Additionally, the gradient of W is a linear superposition of the gradients from each convolution layer. During the neural network training, let’s assume convolution kernel Ki receives gradient ∂`∂Ki , where ` is the loss function. Based on the chain rule, it is clear that the gradient of W is:
∂`
∂W = L∑ i=1 RTi · ∂` ∂Ki (2)
Generating Matrices. There are many different ways to create the generating matrices {Ri}. In this work, we primarily explore the destructive generating matrices, which tend to prevent different kernels from sharing the representation during weight sharing.
Destructive Weight Sharing. For easier discussion, let us first look at a special case, where all of the convolutional kernels have the same size and are used in the same shape in the corresponding convolution layers. In other words, {Ri} are square matrices, and the spatial sizes of all of the convolutional kernels have the same size, din × dout ×w× h, and the input channel dimension din is always equal to the output channel dimension dout. In this case, a filter f in a kernel can be treated as a vector in <dwh. Further, we choose Ri to be a block-diagonal matrix Ri = block-diag{Ai,Ai, . . . ,Ai}, where Ai ∈ O(dwh) is an orthogonal matrix that rotates each filter from the kernel Ki in the same fashion. Similar to the Proposition 2 in (Cheung et al., 2019), we show in the Appendix B that: if Ai, Aj are sampled from the O(M) Haar distribution and fi, fj are the same filter (generated by Ri, Rj respectively from W) from Ki, Kj respectively, then we have E [〈fi, fj〉] = 0 and E [ 〈 fi‖fi‖ , fj ‖fj‖ 〉 2 ] = 1M . Since M is usually large, the same filter from Ki, Kj are close to orthogonal and generally dissimilar. This shows that even when {Ki} are generated from the same W, they are not sharing the representation.
Even though {Ai} are not updated during the training, the size of Ai can be quite large in general. In practice, we can use permutation p ∈ P (M) and element-wise random sign reflection b ∈ B(M) to construct a subset of the orthogonal group O(M), i.e. we choose Ai ∈ {b ◦ p| b ∈ B(M), p ∈ P (M)}.2 Since pseudo-random numbers are used, it takes only two random seeds to store a random permutation and an element-wise random sign reflection.
In this we work, we generalize the usage of Ri beyond the block-diagonal generating matrices described above. {Ki} may have different sizes, which can be chosen even larger than the size of W. When Ki ∈ <Ni is larger than W ∈ <N , the corresponding generating matrix Ri is a tall matrix. There are many ways to efficiently create the generating matrices. We use random permutations P (Ni) and element-wise random sign reflections B(Ni) to create Ri:
Ri ∈ {b ◦ p| b ∈ B(Ni), p ∈ P (Ni)}, i = 1, . . . , L (3) {Ri} tend to lead to destructive weight sharing and lead to better utilization of the parameter capacity. Even Parameter Distribution for Different Layers. While it is easy to randomly sample elements from the W when generating parameters for each layer, it may not be optimal as some elements in
1In this paper, we treat each convolutional kernel as a vector. When the kernel is used to do the convolution, it will be reshaped into the corresponding shape.
2Permutations and element-wise random sign reflection conceptually are subgroups from the orthogonal group, but we shall never use them in the matrix form for the obvious efficiency purpose.
W may never be used, and some elements may be used more than average. We use an equalization technique to guarantee all elements of W are evenly sampled. Suppose the size of the W is N , and the total size of parameters of layers to be generated is M , M > N . We first generate b∗cMN arrays {x|x = 1, .., N} and concatenate them with (M mod N) elements randomly sampled from array {x|x = 1, ..,M}. We call the concatenated array index array u ∈ <M . We randomly shuffle all elements in u. When initializing each layer’s parameter, we sequentially get indices of chosen elements from the shuffled index array u. In this way, each layer’s parameters are randomly and evenly sampled from W. We refer to W as model rings since elements are recurrently used in a loop. For data saving efficiency, we just need to save two random seed numbers (one for sampling (M mod N) elements and one for shuffling) instead of saving the large index array u.
Batch Normalization. Model performance is relatively sensitive to the batch normalization parameters. For better performance, each of the convolution layers needs to have its own batch normalization parameters. In general, however, the size of batch normalization is relatively negligible. Yet when W is extremely small (e.g., 36K parameters), the size of batch normalization should be considered.
4 RECURRENT PARAMETER GENERATOR AT MULTIPLE SCALES
In the previous section, we discuss the general idea of superposition where only one RPG is created and shared globally across all layers. We could also create several local RPGs, and each of them is shared at certain scales, such as blocks and sub-networks. Such super-positions may be useful for certain applications such as recurrent modeling.
RPGs at Block-Level. Researchers propose network architectures that reuse the same design of network blocks multiple times for higher learning capacity, as discussed in the related work. Instead of using one global RPG for the entire network, we could alternatively create several RPGs that are shared within certain network blocks. We take ResNet18 (He et al., 2016) as a concrete example (Fig.3). ResNet18 has four building blocks. Every block has 2 residual convolution modules. To superpose ResNet18 at block scale, we create four local RPGs. Each RPG is shared within the corresponding building block, where the size of the RPG is flexible and can be determined by users.
RPGs at Sub-Network-Level. Reusing sub-networks, or recurrent networks have achieved success in many tasks as they iteratively refine and improves the prediction. Usually, weights are shared when reusing the sub-networks. This may not be optimal as sub-networks at different stages iteratively improve the prediction, and shared weights may limit the learning capacity to adapt for different stages. On the other hand, not sharing weights at all greatly increases the model size. We superpose different sub-networks with one or more RPGs. Superposition sub-networks could have a much smaller model size, while parameters for different sub-networks undergo destructive changes instead of directly copy-paste. We show applications of superpose sub-networks for pose estimation and multitask regression (Section 5.3 and 5.4).
5 EXPERIMENTAL RESULTS
We evaluate the performance of RPG with various tasks illustrated in Fig.2. We refer to model DoF as number of parameters or parameter size for convenience, although their differences have been discussed in Introduction. For classification, RPG was used for the entire network except for the last fully connected (fc) layer. Thus, we discuss reduction in backbone parameters. For example, Res18 has 11M backbone parameters and 512K fc parameters, and RPG was applied to reduce 11M backbone parameters only. Experiments are conducted on NVIDIA GeForce GTX 2080Ti GPUs.
5.1 CIFAR CLASSIFICATION
Implementation Details. All CIFAR experiments use batchsize of 128, weight decay of 5e-4, and initial learning rate of 0.1 with gamma of 0.1 at epoch 60, 120 and 160. We use Kaiming initialization (He et al., 2015) with adaptive scaling. Specifically, shared parameters are initialized with a particular variance and scale the parameters for each layer to make it match the Kaiming initialization.
Compared to Deep Equilibrium Models. As a representative of implicit models, deep equilibrium models (Bai et al., 2019) can reduce model redundancy by finding fix points via additional optimizations. We compare the image classification accuracy on CIFAR10 and CIFAR100 datasets, as well as the inference time on CIFAR100 (Table 1). Following the settings of MDEQ (Bai et al., 2020), an image was sequentially fed into the initial convolutional block, the multi-scale deep equilibrium block (dubbed as MS block), and the classification head. MDEQ (Bai et al., 2020) achieves infinite MS blocks by finding the fixed point of the MS block. We reuse the MS block two to four times without increasing the number of parameters. Our RPG achieves 3.4% - 5.8% gain on CIFAR10 and 3% - 5.9% gain on CIFAR100. Our inference time is 15 - 25 times smaller than MDEQ since MDEQ needs additional time to solve equilibrium during training.
Global RPG with Varying # Parameters. We create one global RPG to generate parameters for convolution layers of ResNet and refer to it as ResNet-RPG. We report CIFAR100 top-1 accuracy of ResNet-RPG18 and ResNet-RPG34 at different number of parameters (Fig.4 and Table 3). Compared to ResNet, ResNet-RPG achieves higher accuracy at the same parameter size. Specifically, we achieve 36% CIFAR100 accuracy with only 8K backbone parameters. Furthermore, ResNet34-RPG achieves higher accuracy than ResNet18-RPG, indicating increasing time complexity gives performance gain.
Local RPGs at the Block-Level. In the previous ResNet-RPG experiments, we use one global RPG (Fig.3-Upper).We also evaluate the performance when RPGs are shared locally at a block level, as shown in Fig.3-Lower. In Table 2, compared to plain ResNet18 at the same number of parameters, our block-level RPG network gives 1.0% gain. In contrast, our ResNet-RPG (parameters are evenly distributed) gives a 1.4% gain. Using one global RPG where parameters of each layer are evenly distributed is 0.4% higher than multiple RPGs.
Comparison to Baselines. Table 2 compares RPG and other baseline parameter reduction methods including random weight sharing, weight sharing with the hashing trick (Chen et al., 2015) and weight sharing with Lego filters Yang et al. (2019). At the same number of parameters, our RPG outperforms all other baselines, demonstrating the effectiveness of the proposed method.
5.2 IMAGENET CLASSIFICATION
Implementation Details. All ImageNet experiments use batch size of 256, weight decay of 3e-5, and an initial learning rate of 0.3 with gamma of 0.1 every 75 epochs and 225 epochs in total. Our schedule is different from the standard schedule as the weight-sharing mechanism requires different training dynamics. We tried a few settings and found this one to be the best for RPG.
RPG with Varying # Parameters. We use one RPG with different number of parameters for ResNet and report the top-1 accuracy (Table 3 and Fig.1(Right)). ResNet-RPGs consistently achieve higher performance compared to ResNets under the same number of parameters. Specifically, ResNetRPG34 achieves the same accuracy 73.4% as ResNet34 with only half of ResNet34 backbone parameters. ResNet-RPG18 also achieves the same accuracy as ResNet18 with only half of ResNet18 backbone parameters. Further, we find RPG networks have higher generalizability (Section 5.6).
Power Law. Empirically, accuracy and number of parameters follow a power law, when RPG model size is lower than 50% original plain ResNet model size. The exponents of the power laws are the same for ResNet18-RPG and ResNet34-RPG on ImageNet, when comparing with ResNet34 accuracies. The scaling law may be useful for estimating the network performance without training the network. Similarly, (Henighan et al., 2020) also identifies a power law for performance and model size of transformers. The proposed RPG enables under-parameterized models for large-scale datasets such as ImageNet, which may unleash more new studies and findings.
5.3 POSE ESTIMATION
Implementation Details. We superpose sub-networks for pose estimation with a globally shared RPG. We use hourglass networks (Newell et al., 2016) as the building backbone. The input image is first fed to an initial convolution block to obtain a feature map. The feature map is then fed to multiple stacked pose estimation sub-networks. Each sub-network outputs a pose estimation prediction, which is penalized by the pose estimation loss. Convolutional pose machine (CPM) (Wei et al., 2016) share all the weights for different sub-networks. We create one global RPG and generate parameters for each sub-network. Our model size is set to be the same as CPM. We also compare with larger models where parameters of sub-networks are not shared.
We evaluate on MPII Human Pose dataset (Andriluka et al., 2014), a benchmark for articulated human pose estimation, which consists of over 28K training samples over 40K people with annotated body joints. We use the hourglass network (Newell et al., 2016) as backbone and follow all their settings.
Results and Analysis. We report the Percentage of Correct Key-points at 50% threshold (PCK@0.5) of different methods in Table 4. CPM (Wei et al., 2016) shares all parameters across different sub-networks. We use one RPG that is shared globally, at the same size as CPM. For reference, we also compare with the no-sharing model as the performance ceiling. Increasing the number of recurrences leads to performance gains for all methods. At the same model size, RPG achieves higher PCK@0.5 than CPM. Increasing the number of parameters by not sharing sub-network parameters also leads to some performance gain.
5.4 MULTI-TASK REGRESSION
Implementation Details. We superpose sub-networks for multi-task regression with multiple RPGs at the building-block level. We focus on predicting depth and normal maps from a given image. We stack multiple copies of SharpNet (Ramamonjisoa & Lepetit, 2019), a network for monocular depth and normal estimation. Specifically, we create multiple RPGs at the SharpNet building-block level. That is, the parameters of corresponding blocks of different sub-networks are generated from the same RPG.
We evaluate the monocular depth and normal prediction performance on the Stanford 3D indoor scene dataset (Armeni et al., 2017). It contains over 70K images with corresponding depths and normals covering over 6,000 m² of indoor area. We follow all settings of SharpNet (Ramamonjisoa & Lepetit, 2019), a state-of-the-art monocular depth and normal estimation network.
Results and Analysis. We report the mean square errors for depth and normal estimation in Table 5. Compared to one-time inference without recurrence, our RPG network gives 3% and 2% gains for depth and normal estimation, respectively. Directly sharing weights but using new batch normalization layers decreases the performance by 1.2% and 0.3% for depth and normal. Sharing both weights and normalization layers further decreases the performance by 0.7% and 0.9% for depth and normal.
5.5 PRUNING RPG
Fine-Grained Pruning. Fine-grained pruning methods aim to reduce the model parameters by sparsifying weight matrices. Such methods usually do not reduce the inference time, although custom algorithms (Gale et al., 2020) may improve the speed. At the same number of parameters, RPG outperforms the state-of-the-art fine-grained pruning method IMP (Frankle et al., 2019). The accuracy drops of RPG and IMP are similar, both around 2% (Table 6). It is worth noting that IMP could achieve faster inference with sparse GPU kernels (Gale et al., 2020).
Coarse-Grained Pruning. While RPG is not designed to reduce FLOPs, it can be combined with coarse-grained pruning to reduce FLOPs. We prune the RPG filters with the lowest ℓ1 norms. Table 7 shows that the pruned RPG achieves on-par performance with the state-of-the-art coarse-grained pruning method Knapsack (Aflalo et al., 2020) at the same FLOPs.
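A minimal sketch of filter-level ℓ1 pruning as it could be applied to a generated RPG kernel; the keep ratio and shapes are illustrative:

```python
# Rank output filters by their l1 norm and drop the smallest ones.
import torch

def prune_filters_by_l1(kernel: torch.Tensor, keep_ratio: float = 0.75) -> torch.Tensor:
    """kernel: (out_channels, in_channels, h, w). Returns the kept filters."""
    l1 = kernel.abs().sum(dim=(1, 2, 3))                # l1 norm per output filter
    n_keep = max(1, int(keep_ratio * kernel.shape[0]))
    keep_idx = torch.argsort(l1, descending=True)[:n_keep]
    return kernel[keep_idx.sort().values]               # drop lowest-l1 filters

pruned = prune_filters_by_l1(torch.randn(64, 64, 3, 3), keep_ratio=0.75)
print(pruned.shape)  # torch.Size([48, 64, 3, 3])
```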
5.6 ANALYSIS
Comparison to Pruning Methods. We report our ResNet18-RPG performance with different number of parameters on ImageNet and some baseline pruning methods in Fig.1(Right). Our RPG networks outperform SOTA pruning methods such as (Aflalo et al., 2020; Dong & Yang, 2019; He et al., 2019; 2018; Dong et al., 2017; Khetan & Karnin, 2020). Specifically, at the same number of parameters, our RPG network has 0.6% gain over the knapsack pruning (Aflalo et al., 2020), a method that achieves the best ImageNet pruning accuracy.
Generalizability. We report the performance gap between the training and validation sets on ImageNet (Table 8(a)) and MPII pose estimation (Table 8(b)). CPM (Wei et al., 2016) serves as the baseline pose estimation method. RPG models consistently achieve lower gaps between the training and validation sets, indicating that RPG models suffer less from over-fitting.
We also report the out-of-distribution performance of RPG models. ObjectNet (Barbu et al., 2019) contains 50k images with 113 classes overlapping with ImageNet. Previous models are reported to have a large performance drop on ObjectNet. We directly evaluate the performance of the ImageNet-trained model on ObjectNet without any fine-tuning (Table 8(c)). With the same number of backbone parameters, our ResNet-RPG achieves a 3.1% gain compared to ResNet18. With the same network architecture design, our ResNet-RPG achieves a 0.5% gain compared to ResNet34. This indicates that our RPG networks have higher out-of-distribution performance even with smaller model sizes.
Quantization. Network quantization can reduce model size with minimal accuracy drop. It is of interest to study whether RPG models, whose parameters have already been shrunk, can be quantized. After 8-bit quantization, the accuracy of ResNet18-RPG (5.6M parameters) drops by only 0.1 percentage points on ImageNet, indicating that RPG can be quantized for further size reduction. Details are in Appendix A.
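A minimal sketch of symmetric per-tensor 8-bit quantization of the shared parameter vector W, as a simplified stand-in for a full quantization pipeline; the tensor size is illustrative:

```python
# Quantize W to int8 with one scale, then dequantize for inference.
import torch

W = torch.randn(5_600_000)                      # shared RPG parameters (illustrative)
scale = W.abs().max() / 127.0                   # map [-max, max] -> [-127, 127]
W_q = torch.clamp((W / scale).round(), -127, 127).to(torch.int8)
W_hat = W_q.float() * scale                     # dequantized weights used at inference

print(f"mean abs reconstruction error: {(W - W_hat).abs().mean():.2e}")
```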
Security. The permutation matrices generated by the random seed can be considered security keys needed to decode the model. In addition, only the random seeds used to generate the transformation matrices need to be saved and transferred, which is efficient in terms of size.
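A minimal sketch of this property: each layer's permutation and sign reflection can be regenerated on demand from a stored seed, so only the seed (the "key") needs to be saved:

```python
# Regenerate a layer's transform (permutation p and sign reflection b) from a seed.
import torch

def make_transform(seed: int, n: int):
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(n, generator=g)                  # p
    sign = torch.randint(0, 2, (n,), generator=g) * 2 - 1  # b in {-1, +1}
    return perm, sign

W = torch.randn(1000)
perm, sign = make_transform(seed=42, n=W.numel())  # only the seed must be stored
K_i = sign * W[perm]                               # reconstruct layer i's kernel
```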
5.7 ABLATION STUDIES
We conduct ablation studies to understand the functions of the permutation and reflection matrices (Fig.5). We evaluate ResNet-RPG34 with 2M backbone parameters. Using both permutation and reflection matrices leads to 76.5% accuracy; permutation matrices only, 75.8%; reflection matrices only, 71.1%; and neither, 70.7%. This suggests that both permutation and reflection matrices are useful for RPGs.
6 DISCUSSION
The common practice in machine learning is to search for the best model in a large model space with many parameters or degrees of freedom (DoF), and then shrink the optimal model for deployment. Our key insight is that a direct and opposite approach might work better: we start from a lean model with a small DoF, which can be unpacked into a large model with many parameters. Then we let gradient descent automatically find the best model under this DoF constraint. Our work is a departure from mainstream approaches toward model optimization and parameter reduction. We show how the model DoF and the actual parameter size can be decoupled: we can define a large model with an arbitrary number of parameters but a small DoF.
We limit our scope to linear destructive weight sharing for different convolutional layers. However, in general, there might also exist nonlinear RPGs and efficient nonlinear generation functions to create convolutional kernels from a shared model ring W. Further, although RPG focuses on reducing model DoF, it can be quantized and pruned to further reduce the FLOPs and run time.
To sum up, we develop an efficient approach to build an arbitrarily complex neural network with any amount of DoF via a recurrent parameter generator. On a wide range of applications, including image classification, pose estimation, and multi-task regression, we show that RPG networks consistently achieve higher performance at the same model size. Further, analysis shows that such networks are less prone to overfitting and perform better on out-of-distribution data.
RPG can be added to any existing network flexibly with any amount of DoF at the user’s discretion. It provides new perspectives for recurrent models, equilibrium models, and model compression. It also serves as a tool for understanding the relationship between network properties and network DoF by factoring out the network architecture.
Reproducibility: We provide our code in supplementary materials. | 1. What is the main contribution of the paper regarding parameter generation for neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of practicality and computational complexity?
3. How does the reviewer assess the novelty and effectiveness of the method compared to other works, including network pruning techniques?
4. What are the concerns regarding the additional computation cost and the impact on inference time?
5. How does the reviewer evaluate the significance and usefulness of the observations made in the paper, such as the power law and the outcome of RPG models?
6. Are there any suggestions for future studies or improvements to the proposed method, such as applying it to vision transformers or comparing it with state-of-the-art network pruning methods? | Summary Of The Paper
Review | Summary Of The Paper
This paper uses parameter generation to generate parameters for different hidden layers in neural networks. There are several significant strengths and weaknesses in this paper. In particular, the practicality of the proposed method is questionable. Please see my detailed comments below.
Review
(Neutral) The parameters of each convolution layer are generated by the generator. Could the authors discuss its relation to "Parameter prediction for unseen deep architectures"?
(Positive) Though the novelty of this paper is limited, the proposed method achieves significant performance gain compared to the existing works.
(Negative) My main concern is that although the proposed method can build an arbitrarily complex neural network with any number of parameters, the computational complexity is not reduced. It would be good for the authors to rethink the practical use of the proposed method. For example, although parameters are saved, the FLOPs are not reduced. To some extent, the proposed method is related to network pruning. But all in all, the proposed technique is far less practical than network pruning methods.
(Negative) I understand that the proposed method shares some merits of convolution in weight sharing. But sharing weights spatially is different from sharing weights across depth. Sharing weights spatially can (i) save parameters, (ii) ease optimization, and (iii) accelerate the computation by using matrix multiplication. But sharing weights across depth (i) will hurt model performance and (ii) cannot reduce the computational cost. Therefore, the proposed method might not be able to be regarded as "efficient."
(Positive) Equation (2) is elegant and precise.
(Positive) The following property is elegant: "Since M is usually large, the same filter from Ki, Kj are close to orthogonal and generally dissimilar."
(Negative) Due to the existence of matrix {A_i}, there would be additional computation cost in the generator, which further increases the computation complexity. In other words, although the parameters are saved, the computational cost might increase.
(Positive) It is reasonable to have a separate BN. It is also reasonable to use RPGs at the block level and at the sub-network level.
(Negative) It would be good if the authors could apply the proposed method to the recent ViTs. Actually, based on my rich experience, vision transformers might have a more severe parameter-redundancy problem along the depth axis. Maybe the proposed method can do well on ViTs. It would be nice if the authors could provide such studies to prove the generalizability of the proposed method.
(Negative) The inference time should also be added to Table 2, Table 3, and Figure 4 to show whether the proposed method is at a disadvantage.
(Positive) The observation of the Power Law is insightful, valuable, and useful, which benefits the community.
(Negative) From Table 7, we can see that even combined with the network pruning method, the proposed method does not hold an advantage, not to mention that SOTA network pruning methods are not compared with.
(Negative) Although the paper compares the proposed method with network pruning methods in Fig. 1 (right), the comparison is quite strange. Most typical network pruning methods focus on FLOPS pruning or the combination of FLOPs and parameters, but not merely parameters.
(Positive) The following result is valuable: "RPG models consistently achieve lower gaps between training and validation set, indicating the RPG model suffers less from over-fitting." The result on OOD data is also promising and valuable. It would be better to have an explanation. |
ICLR | Title
Recurrent Parameter Generators
Abstract
We present a generic method for recurrently using the same parameters for many different convolution layers to build a deep network. Specifically, for a network, we create a recurrent parameter generator (RPG), from which the parameters of each convolution layer are generated. Though using recurrent models to build a deep convolutional neural network (CNN) is not entirely new, our method achieves significant performance gains compared to existing works. We demonstrate how to build a one-layer-size neural network that achieves similar performance to traditional CNN models on various applications and datasets. We use the RPG to build a ResNet18 network with the number of weights equivalent to one convolutional layer of a conventional ResNet and show this model can achieve 67.2% ImageNet top-1 accuracy. Additionally, such a method allows us to build an arbitrarily complex neural network with any number of parameters. For example, we build a ResNet34 with model parameters reduced by more than 400 times, which still achieves 41.6% ImageNet top-1 accuracy. Furthermore, the RPG can be further pruned and quantized for better run-time performance in addition to the model size reduction. We provide a new perspective on model compression: rather than shrinking parameters from a large model, RPG sets a certain parameter-size constraint and uses the gradient descent algorithm to automatically find the best model under the constraint. Extensive experimental results are provided to demonstrate the power of the proposed recurrent parameter generator.
1 INTRODUCTION
Deep learning has achieved great success with increasingly more training data and deeper & larger neural networks: A recently developed NLP model, GPT-3 (Brown et al., 2020), has astonishingly 175 billion parameters! While the model performance generally scales with the number of parameters (Henighan et al., 2020), with parameters outnumbering training data, the model is significantly over-parameterized. Tremendous effort has been made to reduce the parameter redundancy from different perspectives, including neural network pruning (LeCun et al., 1990; Han et al., 2016; Liu et al., 2018), efficient network design spaces (Howard et al., 2017; Iandola et al., 2016; Sandler et al., 2018), parameter regularization (Wan et al., 2013; Wang et al., 2020a; Srivastava et al., 2014), model quantization (Hubara et al., 2017; Rastegari et al., 2016; Louizos et al., 2019), neural architecture search (Zoph & Le, 2017; Cai et al., 2018; Wan et al., 2020), recurrent models (Bai et al., 2019; 2020; Wei et al., 2016), multi-task feature encoding (Ramamonjisoa & Lepetit, 2019; Hao et al., 2021), etc.
One of the most prominent approaches in this direction is the pruning-based model compression, which dates back to the late 80s or early 90s (Mozer & Smolensky, 1989; LeCun et al., 1990) and has enjoyed a resurgence (Han et al., 2016; Blalock et al., 2020) recently. These pruning methods seek to remove the unimportant parameters from a pre-trained large neural network and can frequently achieve an enormous model-compression ratio.
Though sharing a similar motivation to reduce parameter redundancy, we explore an entirely different territory of model parameter reduction: rather than compressing a large model, we define an arbitrarily large model based on a fixed set of parameters to maximize the model capacity. In this work, we propose to define many different layers in a deep neural network based on a fixed amount of parameters, which we call a recurrent parameter generator (RPG). That is, we differentiate between the number of model parameters and the degrees of freedom (DoF). Traditionally, model parameters are treated independently of each other; the total number of parameters equals the number of DoF. However, by tapping into how a core set of free parameters can be assigned to the neural network model, we can develop a large model of many parameters with a small degree of freedom. In other words,
there is excess capacity in neural network models independent of how and where the parameters are used in the network. Even at the level of individual scalar values, parameters can be reused in another arbitrary location of the deep network architecture without significantly impacting model performance. Surprisingly, backpropagation training of a deep network is able to cope with the same parameter being assigned to multiple random locations in the network. Through extensive experiments, we show that a large neural network does not need to be overparameterized to achieve competitive performance. Particularly, a ResNet18 can be implemented with the number of weights equivalent to one convolution layer in a conventional ResNet (4.72× parameter reduction) and still achieve 67.2% ImageNet top-1 accuracy. The proposed method is also extremely flexible in reducing the model parameters. In some sense, the proposed RPG method can be viewed as an automatic model parameter reduction technique, which explores the optimal accuracy-parameter trade-off. When we reduce the model parameters, RPG shows graceful performance degradation, and its compression results are frequently on par with SOTA pruning methods, in addition to its flexibility. Even if we reduce the ResNet18 backbone parameters to 36K, which is about a 300× reduction, ResNet18 can still achieve 40.0% ImageNet top-1 accuracy. Notably, we choose a destructive parameter sharing method (Cheung et al., 2019) for RPG in this work, which discourages any potential representation sharing from layer to layer. Compared to other recurrent weight-sharing methods, e.g., the convolutional pose machine (CPM) or multi-scale deep equilibrium models (MDEQ), our method achieves competitive performance on various benchmarks. Further, we show RPG can be quantized and pruned to improve FLOPs and run time with very small accuracy drops. This makes RPG a strong and practical baseline for probing whether there is nontrivial representation sharing within any recurrent network.
To summarize, we make the following contributions:
1. This work provides a new perspective towards automatic model parameter reduction: we can define a neural network with a certain DoF constraint and let gradient descent optimization automatically find the best model under the desired constraint.
2. We propose the recurrent parameter generator (RPG), which decouples the network architecture and the network DoF. Given a certain neural network architecture, we can flexibly choose any DoF to construct the network.
3. By separating the network architecture from the parameter generator, RPG becomes a tool for us to understand the relationship between the model DoF and the network performance. We observe an empirical log-linear DoF-Accuracy relationship.
2 RELATED WORK
There are many important efforts to compress neural networks or to reduce the redundancy in neural network parameters. We discuss each of the approaches and their relationships to our work.
Model Pruning, Neural Architecture Search, and Quantization. Model pruning seeks to remove the unimportant parameters in a trained model. Recently, neural architecture search has been proposed as a form of coarse-grained model pruning (Yu et al., 2018; Dong & Yang, 2019). Another related effort is neural network quantization (Hubara et al., 2017; Rastegari et al., 2016; Louizos et al., 2019), which seeks to reduce the bits used for each parameter and can frequently reduce the model size by 4× with minimal accuracy drop. More recently, Dollár et al. (2021) present a framework for analyzing model scaling strategies that considers network properties such as FLOPs and activations.
Parameter Regularization and Priors. Another highly related direction is parameter regularization. Regularization has been widely used to reduce model redundancy (Krogh & Hertz, 1992), alleviate model overfitting (Srivastava et al., 2014; Wan et al., 2013), and ensure desired mathematical regularity (Wang et al., 2020a). RPG can be viewed as a parameter regularization in the sense that weight sharing poses many equality constraints to weights and regularizes weights to a low-dimensional space. HyperNeat (Stanley et al., 2009) and CPPNs (Stanley, 2007) use networks to determine the weight between two neurons as a function of their positions. Karaletsos et al. (2018) and Karaletsos & Bui (2020) introduced a similar idea by providing a hierarchical prior for network parameters.
Recurrent Networks and Deep Equilibrium Models. Recurrence and feedback have been shown in psychology and neuroscience to act as modulators or competitive inhibitors that aid feature grouping (Gilbert & Sigman, 2007), figure-ground segregation (Hupé et al., 1998), and object recognition (Wyatte et al., 2012). Recurrence-inspired mechanisms also achieve success in feed-forward models. There are two main types of employing recurrence, based on whether weights are shared across recurrent modules. ResNet (He et al., 2016), a representative of reusing similar structures without weight sharing, introduces parallel residual connections and achieves better performance by going deeper in networks. Similarly, some works (Szegedy et al., 2015; Srivastava et al., 2015) also find it useful to iteratively inject thus-far representations into the feed-forward network. Stacked inference methods (Ramakrishna et al., 2014; Wolpert, 1992; Weiss & Taskar, 2010) are also related, although they consider each output in isolation. Several works find sharing weights across recurrent modules beneficial. They demonstrate applications in temporal modeling (Weiss & Taskar, 2010; Xingjian et al., 2015; Karpathy & Fei-Fei, 2015), spatial attention (Mnih et al., 2014; Butko & Movellan, 2009), pose estimation (Wei et al., 2016; Carreira et al., 2016), and so on (Li et al., 2016; Zamir et al., 2017). Such methods usually shine in modeling long-term dependencies. In this work, we recurrently share weights across different layers of a feedback network to reduce network redundancy.
Given that stacking weight-shared modules improves performance, researchers have considered running such modules to effectively infinite depth by making the sequential modules converge to a fixed point (LeCun et al., 1988; Bai et al., 2019). Applying such equilibrium models to existing networks, they show improved performance in many natural language processing (Bai et al., 2019) and computer vision tasks (Bai et al., 2020; Wang et al., 2020b). One issue with deep equilibrium models is that the forward and backward propagation usually take many more iterations than explicit feed-forward networks. Some work (Fung et al., 2021) improves the efficiency by making the backward propagation Jacobian-free. Another issue is that infinite depth and fixed points may not be necessary, or may even be too strict, for some tasks. Instead of achieving infinite depth, our model shares parameters up to a certain level. We empirically compare with equilibrium models in Section 5.
Efficient Network Space and Matrix Factorization. Convolution is an efficient and structured matrix-vector multiplication. Arguably, the most fundamental idea in building efficient linear systems is matrix factorization. Given the redundancy in deep convolutional neural network parameters, one can leverage the matrix factorization concept, e.g., factorized convolutions, and design more efficient network classes (Howard et al., 2017; Iandola et al., 2016; Tan & Le, 2019; Sandler et al., 2018).
3 RECURRENT PARAMETER GENERATOR
We define recurrent parameter generators and show a certain kind of generating matrices that leads to destructive weight sharing. For better parameter capacity, we introduce an even sampling strategy.
Recurrent Parameter Generator. Assume we are constructing a deep convolutional neural network that contains L different convolution layers. Let K1, K2, . . . , KL be the corresponding L convolutional kernels¹. Rather than using separate sets of parameters for different convolution layers, we create a single set of parameters W ∈ ℝ^N and use it to generate the corresponding parameters for each convolution layer:

Ki = Ri · W, i ∈ {1, . . . , L} (1)

where Ri is a fixed, predefined generating matrix used to generate Ki from W. We call {Ri} and W the recurrent parameter generator (RPG). In this work, we always assume that the size of W is smaller than the total number of parameters of the model, i.e., |W| ≤ ∑i |Ki|. This means an element of W will generally be used in more than one layer of a neural network. Additionally, the gradient of W is a linear superposition of the gradients from each convolution layer. During training, assume convolution kernel Ki receives gradient ∂ℓ/∂Ki, where ℓ is the loss function. By the chain rule, the gradient of W is:

∂ℓ/∂W = ∑_{i=1}^{L} Ri^T · ∂ℓ/∂Ki (2)
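A minimal sketch of this gradient superposition, assuming a permutation-plus-sign instantiation of Ri (introduced below); when two kernels are generated from the same W, autograd accumulates the superposed gradient of Eq. (2) automatically:

```python
# Two conv kernels generated from one shared parameter vector W; backprop
# through the (differentiable) generation yields W.grad = sum_i Ri^T dL/dKi.
import torch
import torch.nn.functional as F

W = torch.randn(2 * 8 * 8 * 3 * 3, requires_grad=True)  # shared parameters

def gen_kernel(seed, shape=(8, 8, 3, 3)):
    g = torch.Generator().manual_seed(seed)
    n = shape[0] * shape[1] * shape[2] * shape[3]
    perm = torch.randperm(W.numel(), generator=g)[:n]       # p
    sign = torch.randint(0, 2, (n,), generator=g) * 2 - 1   # b in {-1, +1}
    return (sign * W[perm]).view(*shape)                    # Ki = Ri . W

x = torch.randn(1, 8, 16, 16)
h = F.relu(F.conv2d(x, gen_kernel(0), padding=1))
y = F.conv2d(h, gen_kernel(1), padding=1)
y.pow(2).mean().backward()
print(W.grad.abs().sum() > 0)  # True: gradients from both layers superpose on W
```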
Generating Matrices. There are many different ways to create the generating matrices {Ri}. In this work, we primarily explore the destructive generating matrices, which tend to prevent different kernels from sharing the representation during weight sharing.
Destructive Weight Sharing. For easier discussion, let us first look at a special case where all of the convolutional kernels have the same size and are used in the same shape in the corresponding convolution layers. In other words, {Ri} are square matrices, the spatial sizes of all of the convolutional kernels are the same, din × dout × w × h, and the input channel dimension din is always equal to the output channel dimension dout. In this case, a filter f in a kernel can be treated as a vector in ℝ^{dwh}. Further, we choose Ri to be a block-diagonal matrix Ri = block-diag{Ai, Ai, . . . , Ai}, where Ai ∈ O(dwh) is an orthogonal matrix that rotates each filter from the kernel Ki in the same fashion. Similar to Proposition 2 in Cheung et al. (2019), we show in Appendix B that: if Ai, Aj are sampled from the O(M) Haar distribution and fi, fj are the same filter (generated by Ri, Rj respectively from W) from Ki, Kj respectively, then E[⟨fi, fj⟩] = 0 and E[⟨fi/‖fi‖, fj/‖fj‖⟩²] = 1/M. Since M is usually large, the same filter from Ki and Kj is close to orthogonal and generally dissimilar. This shows that even when {Ki} are generated from the same W, they do not share the representation.
Even though {Ai} are not updated during training, the size of Ai can be quite large in general. In practice, we can use permutations p ∈ P(M) and element-wise random sign reflections b ∈ B(M) to construct a subset of the orthogonal group O(M), i.e., we choose Ai ∈ {b ◦ p | b ∈ B(M), p ∈ P(M)}.² Since pseudo-random numbers are used, it takes only two random seeds to store a random permutation and an element-wise random sign reflection.
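A quick Monte Carlo check of the destructive-sharing property above, under this permutation-plus-sign construction (the dimension M and the number of trials are illustrative); the mean squared cosine similarity between two filters generated from the same W should concentrate near 1/M:

```python
# Filters generated from one W via independent permutation + sign reflection
# are near-orthogonal: E[<fi/|fi|, fj/|fj|>^2] is approximately 1/M.
import torch

M = 3 * 3 * 64  # filter dimension d*w*h (illustrative)
W = torch.randn(M)

def gen_filter(seed):
    g = torch.Generator().manual_seed(seed)
    sign = torch.randint(0, 2, (M,), generator=g) * 2 - 1
    return sign * W[torch.randperm(M, generator=g)]

cos2 = []
for s in range(0, 2000, 2):
    fi, fj = gen_filter(s), gen_filter(s + 1)
    cos2.append((torch.dot(fi, fj) / (fi.norm() * fj.norm())).item() ** 2)

print(f"mean squared cosine: {sum(cos2) / len(cos2):.4f}, 1/M = {1 / M:.4f}")
```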
In this work, we generalize the usage of Ri beyond the block-diagonal generating matrices described above. {Ki} may have different sizes, which can even be chosen larger than the size of W. When Ki ∈ ℝ^{Ni} is larger than W ∈ ℝ^N, the corresponding generating matrix Ri is a tall matrix. There are many ways to efficiently create the generating matrices. We use random permutations P(Ni) and element-wise random sign reflections B(Ni) to create Ri:
Ri ∈ {b ◦ p | b ∈ B(Ni), p ∈ P(Ni)}, i = 1, . . . , L (3)

Such {Ri} tend to lead to destructive weight sharing and to better utilization of the parameter capacity.

Even Parameter Distribution for Different Layers. While it is easy to randomly sample elements from W when generating parameters for each layer, this may not be optimal, as some elements in W may never be used while others are used more often than average. We use an equalization technique to guarantee that all elements of W are evenly sampled. Suppose the size of W is N and the total number of parameters of the layers to be generated is M, with M > N. We first generate ⌊M/N⌋ copies of the array {x | x = 1, . . . , N} and concatenate them with (M mod N) elements randomly sampled from {x | x = 1, . . . , N}. We call the concatenated array of length M the index array u. We randomly shuffle all elements in u. When initializing each layer's parameters, we sequentially take indices from the shuffled index array u. In this way, each layer's parameters are randomly and evenly sampled from W. We refer to W as a model ring since its elements are recurrently used in a loop. For storage efficiency, we only need to save two random seeds (one for sampling the (M mod N) extra elements and one for shuffling) instead of the large index array u.

¹In this paper, we treat each convolutional kernel as a vector. When the kernel is used to do the convolution, it is reshaped into the corresponding shape.

²Permutations and element-wise random sign reflections are conceptually subgroups of the orthogonal group, but we never use them in matrix form, for obvious efficiency reasons.
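A minimal sketch of this even-sampling index array, assuming NumPy; every element of W is used either ⌊M/N⌋ or ⌊M/N⌋+1 times, and only the two seeds need to be stored:

```python
# Build the index array u: floor(M/N) full copies of {0..N-1}, plus M mod N
# extra indices, then a seeded shuffle; per-element usage differs by at most 1.
import numpy as np

def build_index_array(N, M, seed_extra=0, seed_shuffle=1):
    u = np.tile(np.arange(N), M // N)
    extra = np.random.RandomState(seed_extra).choice(N, M % N, replace=False)
    u = np.concatenate([u, extra])
    np.random.RandomState(seed_shuffle).shuffle(u)
    return u

u = build_index_array(N=1000, M=2500)
counts = np.bincount(u, minlength=1000)
print(counts.min(), counts.max())  # 2 3
```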
Batch Normalization. Model performance is relatively sensitive to the batch normalization parameters. For better performance, each of the convolution layers needs to have its own batch normalization parameters. In general, however, the size of batch normalization is relatively negligible. Yet when W is extremely small (e.g., 36K parameters), the size of batch normalization should be considered.
4 RECURRENT PARAMETER GENERATOR AT MULTIPLE SCALES
In the previous section, we discuss the general idea of superposition where only one RPG is created and shared globally across all layers. We could also create several local RPGs, and each of them is shared at certain scales, such as blocks and sub-networks. Such super-positions may be useful for certain applications such as recurrent modeling.
RPGs at Block-Level. Researchers propose network architectures that reuse the same design of network blocks multiple times for higher learning capacity, as discussed in the related work. Instead of using one global RPG for the entire network, we could alternatively create several RPGs that are shared within certain network blocks. We take ResNet18 (He et al., 2016) as a concrete example (Fig.3). ResNet18 has four building blocks. Every block has 2 residual convolution modules. To superpose ResNet18 at block scale, we create four local RPGs. Each RPG is shared within the corresponding building block, where the size of the RPG is flexible and can be determined by users.
RPGs at Sub-Network-Level. Reusing sub-networks, i.e., recurrent networks, has achieved success in many tasks, as it iteratively refines and improves the prediction. Usually, weights are shared when reusing the sub-networks. This may not be optimal, as sub-networks at different stages iteratively improve the prediction, and shared weights may limit the learning capacity to adapt to different stages. On the other hand, not sharing weights at all greatly increases the model size. We superpose different sub-networks with one or more RPGs. Superposed sub-networks can have a much smaller model size, while the parameters of different sub-networks undergo destructive changes instead of direct copy-paste. We show applications of superposed sub-networks for pose estimation and multi-task regression (Sections 5.3 and 5.4).
5 EXPERIMENTAL RESULTS
We evaluate the performance of RPG with various tasks illustrated in Fig.2. We refer to model DoF as number of parameters or parameter size for convenience, although their differences have been discussed in Introduction. For classification, RPG was used for the entire network except for the last fully connected (fc) layer. Thus, we discuss reduction in backbone parameters. For example, Res18 has 11M backbone parameters and 512K fc parameters, and RPG was applied to reduce 11M backbone parameters only. Experiments are conducted on NVIDIA GeForce GTX 2080Ti GPUs.
5.1 CIFAR CLASSIFICATION
Implementation Details. All CIFAR experiments use a batch size of 128, weight decay of 5e-4, and an initial learning rate of 0.1 decayed by a factor (gamma) of 0.1 at epochs 60, 120, and 160. We use Kaiming initialization (He et al., 2015) with adaptive scaling. Specifically, the shared parameters are initialized with a particular variance, and the generated parameters of each layer are then scaled to match the Kaiming initialization.
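A minimal sketch of this adaptive scaling, assuming the standard Kaiming std for ReLU networks; the paper's exact scaling scheme may differ:

```python
# Rescale each generated kernel so its std matches the Kaiming std its layer
# would have had with independent parameters.
import math
import torch

W = torch.randn(100_000)  # shared parameters, roughly unit variance

def scaled_kernel(shape, seed):
    g = torch.Generator().manual_seed(seed)
    idx = torch.randint(0, W.numel(), (math.prod(shape),), generator=g)
    k = W[idx].view(*shape)
    fan_in = shape[1] * shape[2] * shape[3]
    return k * (math.sqrt(2.0 / fan_in) / k.std())  # match Kaiming std

k = scaled_kernel((64, 32, 3, 3), seed=0)
print(k.std())  # ~ sqrt(2 / (32*3*3)) ~ 0.083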
Compared to Deep Equilibrium Models. As a representative of implicit models, deep equilibrium models (Bai et al., 2019) can reduce model redundancy by finding fixed points via additional optimization. We compare the image classification accuracy on the CIFAR10 and CIFAR100 datasets, as well as the inference time on CIFAR100 (Table 1). Following the settings of MDEQ (Bai et al., 2020), an image is sequentially fed into the initial convolutional block, the multi-scale deep equilibrium block (dubbed the MS block), and the classification head. MDEQ (Bai et al., 2020) achieves effectively infinite MS blocks by finding the fixed point of the MS block. We reuse the MS block two to four times without increasing the number of parameters. Our RPG achieves 3.4% - 5.8% gains on CIFAR10 and 3% - 5.9% gains on CIFAR100. Our inference time is 15 to 25 times smaller than MDEQ's, since MDEQ needs additional iterations to solve for the equilibrium.
Global RPG with Varying # Parameters. We create one global RPG to generate parameters for the convolution layers of ResNet and refer to it as ResNet-RPG. We report the CIFAR100 top-1 accuracy of ResNet-RPG18 and ResNet-RPG34 at different numbers of parameters (Fig.4 and Table 3). Compared to ResNet, ResNet-RPG achieves higher accuracy at the same parameter size. Specifically, we achieve 36% CIFAR100 accuracy with only 8K backbone parameters. Furthermore, ResNet34-RPG achieves higher accuracy than ResNet18-RPG, indicating that increasing time complexity yields a performance gain.
Local RPGs at the Block-Level. In the previous ResNet-RPG experiments, we use one global RPG (Fig.3-Upper). We also evaluate the performance when RPGs are shared locally at the block level, as shown in Fig.3-Lower. In Table 2, compared to plain ResNet18 at the same number of parameters, our block-level RPG network gives a 1.0% gain. In contrast, our ResNet-RPG (with parameters evenly distributed across layers) gives a 1.4% gain. Using one global RPG where the parameters of each layer are evenly distributed is thus 0.4% higher than using multiple local RPGs.
Comparison to Baselines. Table 2 compares RPG with other baseline parameter reduction methods, including random weight sharing, weight sharing with the hashing trick (Chen et al., 2015), and weight sharing with Lego filters (Yang et al., 2019). At the same number of parameters, our RPG outperforms all other baselines, demonstrating the effectiveness of the proposed method.
5.2 IMAGENET CLASSIFICATION
Implementation Details. All ImageNet experiments use a batch size of 256, weight decay of 3e-5, and an initial learning rate of 0.3 decayed by a factor (gamma) of 0.1 every 75 epochs, for 225 epochs in total. Our schedule differs from the standard one because the weight-sharing mechanism requires different training dynamics. We tried a few settings and found this one to be the best for RPG.
RPG with Varying # Parameters. We use one RPG with different numbers of parameters for ResNet and report the top-1 accuracy (Table 3 and Fig.1(Right)). ResNet-RPGs consistently achieve higher performance than ResNets under the same number of parameters. Specifically, ResNet-RPG34 achieves the same 73.4% accuracy as ResNet34 with only half of the ResNet34 backbone parameters. ResNet-RPG18 also achieves the same accuracy as ResNet18 with only half of the ResNet18 backbone parameters. Further, we find RPG networks have higher generalizability (Section 5.6).
Power Law. Empirically, accuracy and the number of parameters follow a power law when the RPG model size is lower than 50% of the original plain ResNet model size. The exponents of the power laws are the same for ResNet18-RPG and ResNet34-RPG on ImageNet when compared with the ResNet34 accuracies. The scaling law may be useful for estimating network performance without training the network. Similarly, Henighan et al. (2020) also identify a power law between performance and model size for transformers. The proposed RPG enables under-parameterized models for large-scale datasets such as ImageNet, which may enable new studies and findings.
5.3 POSE ESTIMATION
Implementation Details. We superpose sub-networks for pose estimation with a globally shared RPG. We use hourglass networks (Newell et al., 2016) as the building backbone. The input image is first fed to an initial convolution block to obtain a feature map. The feature map is then fed to multiple stacked pose estimation sub-networks. Each sub-network outputs a pose estimation prediction, which is penalized by the pose estimation loss. The convolutional pose machine (CPM) (Wei et al., 2016) shares all weights across different sub-networks. We create one global RPG and generate parameters for each sub-network. Our model size is set to be the same as CPM's. We also compare with larger models where the parameters of sub-networks are not shared.
We evaluate on MPII Human Pose dataset (Andriluka et al., 2014), a benchmark for articulated human pose estimation, which consists of over 28K training samples over 40K people with annotated body joints. We use the hourglass network (Newell et al., 2016) as backbone and follow all their settings.
Results and Analysis. We report the Percentage of Correct Key-points at 50% threshold (PCK@0.5) of different methods in Table 4. CPM (Wei et al., 2016) shares all parameters across different sub-networks. We use one RPG that is shared globally, at the same size as CPM. For reference, we also compare with the no-sharing model as the performance ceiling. Increasing the number of recurrences leads to performance gains for all methods. At the same model size, RPG achieves higher PCK@0.5 than CPM. Increasing the number of parameters by not sharing sub-network parameters also leads to some performance gain.
5.4 MULTI-TASK REGRESSION
Implementation Details. We superpose sub-networks for multi-task regression with multiple RPGs at the building-block level. We focus on predicting depth and normal maps from a given image. We stack multiple copies of SharpNet (Ramamonjisoa & Lepetit, 2019), a network for monocular depth and normal estimation. Specifically, we create multiple RPGs at the SharpNet building-block level. That is, the parameters of corresponding blocks of different sub-networks are generated from the same RPG.
We evaluate the monocular depth and normal prediction performance on the Stanford 3D indoor scene dataset (Armeni et al., 2017). It contains over 70K images with corresponding depths and normals covering over 6,000 m² of indoor area. We follow all settings of SharpNet (Ramamonjisoa & Lepetit, 2019), a state-of-the-art monocular depth and normal estimation network.
Results and Analysis. We report the mean square errors for depth and normal estimation in Table 5. Compared to one-time inference without recurrence, our RPG network gives 3% and 2% gains for depth and normal estimation, respectively. Directly sharing weights but using new batch normalization layers decreases the performance by 1.2% and 0.3% for depth and normal. Sharing both weights and normalization layers further decreases the performance by 0.7% and 0.9% for depth and normal.
5.5 PRUNING RPG
Fine-Grained Pruning. Fine-grained pruning methods aim to reduce the model parameters by sparsifying weight matrices. Such methods usually do not reduce the inference time, although custom algorithms (Gale et al., 2020) may improve the speed. At the same number of parameters, RPG outperforms the state-of-the-art fine-grained pruning method IMP (Frankle et al., 2019). The accuracy drops of RPG and IMP are similar, both around 2% (Table 6). It is worth noting that IMP could achieve faster inference with sparse GPU kernels (Gale et al., 2020).
Coarse-Grained Pruning. While RPG is not designed to reduce FLOPs, it can be combined with coarse-grained pruning to reduce FLOPs. We prune the RPG filters with the lowest ℓ1 norms. Table 7 shows that the pruned RPG achieves on-par performance with the state-of-the-art coarse-grained pruning method Knapsack (Aflalo et al., 2020) at the same FLOPs.
5.6 ANALYSIS
Comparison to Pruning Methods. We report our ResNet18-RPG performance with different number of parameters on ImageNet and some baseline pruning methods in Fig.1(Right). Our RPG networks outperform SOTA pruning methods such as (Aflalo et al., 2020; Dong & Yang, 2019; He et al., 2019; 2018; Dong et al., 2017; Khetan & Karnin, 2020). Specifically, at the same number of parameters, our RPG network has 0.6% gain over the knapsack pruning (Aflalo et al., 2020), a method that achieves the best ImageNet pruning accuracy.
Generalizability. We report the performance gap between the training and validation sets on ImageNet (Table 8(a)) and MPII pose estimation (Table 8(b)). CPM (Wei et al., 2016) serves as the baseline pose estimation method. RPG models consistently achieve lower gaps between the training and validation sets, indicating that RPG models suffer less from over-fitting.
We also report the out-of-distribution performance of RPG models. ObjectNet (Barbu et al., 2019) contains 50k images with 113 classes overlapping with ImageNet. Previous models are reported to have a large performance drop on ObjectNet. We directly evaluate the performance of the ImageNet-trained model on ObjectNet without any fine-tuning (Table 8(c)). With the same number of backbone parameters, our ResNet-RPG achieves a 3.1% gain compared to ResNet18. With the same network architecture design, our ResNet-RPG achieves a 0.5% gain compared to ResNet34. This indicates that our RPG networks have higher out-of-distribution performance even with smaller model sizes.
Quantization. Network quantization can reduce model size with minimal accuracy drop. It is of interest to study whether RPG models, whose parameters have already been shrunk, can be quantized. After 8-bit quantization, the accuracy of ResNet18-RPG (5.6M parameters) drops by only 0.1 percentage points on ImageNet, indicating that RPG can be quantized for further size reduction. Details are in Appendix A.
Security. The permutation matrices generated by the random seed can be considered security keys needed to decode the model. In addition, only the random seeds used to generate the transformation matrices need to be saved and transferred, which is efficient in terms of size.
5.7 ABLATION STUDIES
We conduct ablation studies to understand the functions of the permutation and reflection matrices (Fig.5). We evaluate ResNet-RPG34 with 2M backbone parameters. Using both permutation and reflection matrices leads to 76.5% accuracy; permutation matrices only, 75.8%; reflection matrices only, 71.1%; and neither, 70.7%. This suggests that both permutation and reflection matrices are useful for RPGs.
6 DISCUSSION
The common practice in machine learning is to search for the best model in a large model space with many parameters or degrees of freedom (DoF), and then shrink the optimal model for deployment. Our key insight is that a direct and opposite approach might work better: we start from a lean model with a small DoF, which can be unpacked into a large model with many parameters. Then we let gradient descent automatically find the best model under this DoF constraint. Our work is a departure from mainstream approaches toward model optimization and parameter reduction. We show how the model DoF and the actual parameter size can be decoupled: we can define a large model with an arbitrary number of parameters but a small DoF.
We limit our scope to linear destructive weight sharing for different convolutional layers. However, in general, there might also exist nonlinear RPGs and efficient nonlinear generation functions to create convolutional kernels from a shared model ring W. Further, although RPG focuses on reducing model DoF, it can be quantized and pruned to further reduce the FLOPs and run time.
To sum up, we develop an efficient approach to build an arbitrarily complex neural network with any amount of DoF via a recurrent parameter generator. On a wide range of applications, including image classification, pose estimation, and multi-task regression, we show that RPG networks consistently achieve higher performance at the same model size. Further, analysis shows that such networks are less prone to overfitting and perform better on out-of-distribution data.
RPG can be added to any existing network flexibly with any amount of DoF at the user’s discretion. It provides new perspectives for recurrent models, equilibrium models, and model compression. It also serves as a tool for understanding the relationship between network properties and network DoF by factoring out the network architecture.
Reproducibility: We provide our code in supplementary materials. | 1. What is the focus of the paper on reducing the size of deep models?
2. What are the strengths of the proposed approach, particularly regarding weight sharing?
3. What are the limitations of the method compared to other compression techniques?
4. Do you have any concerns about the applicability of the method in specific scenarios?
5. How does the reviewer assess the clarity and comparisons made in the paper? | Summary Of The Paper
Review | Summary Of The Paper
In order to reduce the size of deep models, this work proposes a parameter-sharing method for different convolutional layers. With the help of one shared set of parameters, all convolutional kernels can be generated from the shared parameters. They show the effectiveness of this method on classification and pose estimation tasks.
Review
Strengths:
This paper proposes a method of weight sharing. The authors show that re-utilization of parameters generated by their recurrent parameter generator introduces diversity among kernel parameters within a single model. By reusing weights, the model size is reduced greatly.
Weaknesses:
This method is limited in novelty. Compared to conventional vector quantization or plain quantization methods, RPG is not clearly better.
In Section 3, they refer to batch normalization. At inference time for normal CNNs, we usually fuse the convolutional layer and the batch normalization layer into one conv layer. However, the proposed method cannot easily do this. If the fusion method is used, extra computations will be executed.
In the paper "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", the compression ratio is nearly 2%. I cannot see any comparison in the experiments.
The description of "Destructive Weight Sharing" is not easy to understand. |
ICLR | Title
Neuro-Symbolic Procedural Planning with Commonsense Prompting
Abstract
Procedural planning aims to implement complex high-level goals by decomposition into simpler low-level steps. Although procedural planning is a basic skill set for humans in daily life, it remains a challenge for large language models (LLMs) that lack a deep understanding of the cause-effect relations in procedures. Previous methods require manual exemplars to acquire procedural knowledge from LLMs in the zero-shot setting. However, such elicited pre-trained knowledge in LLMs induces spurious correlations between goals and steps, impairing the model’s generalization to unseen tasks. In contrast, this paper proposes a neuro-symbolic procedural PLANner (PLAN) that elicits procedural knowledge from the LLMs with commonsense-infused prompting. To mitigate spurious goal-step correlations, we use symbolic program executors on the latent procedural representations to formalize prompts from external knowledge bases as a causal intervention toward the Structural Causal Model of procedural planning. Both automatic and human evaluations on WikiHow and RobotHow show the superiority of PLAN on procedural planning without further training or manual exemplars.1
1 INTRODUCTION
How to make a cup of coffee? As humans, we can easily specify a procedure to solve this task, using our innate ability of commonsense reasoning. However, can we endow machines with the same ability to construct a sequential plan? As depicted in Figure 1, procedural planning (Pearson, 1996; Zhang et al., 2020b; Huang et al., 2022) aims to decompose a high-level goal (Task: Watch TV) into a sequence of temporally extended steps (Procedural Plan: Step at all five time-steps).
We study procedural planning as a conditional text generation problem since it resembles real-world scenarios. Previous approaches (Huang et al., 2022; Ahn et al., 2022) require a small number of carefully written or held-out exemplars to acquire procedural knowledge. However, such manual exemplars, evolved from task data, cannot cover the ever-changing task setups and the flexible dependency relations among goals and steps. In fact, the biased data may cause the model to learn spurious correlations and hinder the model from generalizing well in zero-shot scenarios. Studies in cognitive science show that humans rely on chunking mechanisms (Gobet et al., 2001; Miller, 1956), which turn primitive stimuli into conceptual groups to solve novel and complex problems. Inspired by this, we hypothesize that generalizable procedural planning ability can be achieved by learning cause-effect relations among complex goals and simpler steps using external knowledge.
To reveal the cause-effect relations in procedural planning, we devise a Structural Causal Model (SCM) (Peters et al., 2017), a directed acyclic graph commonly used to describe the causal relationships within a system (Pearl, 2009). As depicted in Figure 2, the pre-trained knowledge (D) in LLMs (e.g., TV and living room are highly correlated) confounds the system (D influences T, Si−1, and Si, resulting in spurious correlations) into making biased decisions toward an unreasonable step (e.g., Find Television). Thus, we adopt front-door adjustment (definition in Appendix A.3), which utilizes a mediator (Pi) that blocks all directed paths from the cause (T or Si−1) to the effect (Si). In this way, T (or Si−1) affects Si by flowing through indirect paths: T (or Si−1) affects Pi, and Pi affects Si. We can then identify the causal effects among goals and steps by investigating the indirect effect (Equation 3), which is computed by multiplying the effect of T (or Si−1) on Pi (Equation 1) with the effect of Pi on Si (Equation 2). With the above front-door adjustment, we can mitigate the spurious correlations (e.g., between "television" and "living room") and thus make reasonable decisions on steps (e.g., Find book). Please refer to A.1 for causal preliminaries (including explanations of SCM, confounder, mediator, and spurious correlations), and A.3 for the front-door adjustment definition.

¹Source code and datasets are publicly available at https://sites.google.com/view/iclr-clap
Guided by the above causal analysis of procedural planning, we need to construct the mediator Pi and then intervene on task T and prompt Pi, which is required to compute the conditional probability in Equation3. As depicted in Figure 3, we seek to automatically construct commonsense-infused prompts as the mediator Pi by concatenating the task, previous steps with commonsense knowledge extracted from external resources (e.g., ConceptNet (Speer et al., 2017)). First, we modify the goal input by sampling a task-relevant knowledge subgraph (Stage1 in Section 3.1) to implement interventions on T . Then, we modify the prompt by adapting the edge weight to implement interventions on Pi (Edge-Wise Adoption of Stage2 in Section 3.1). However, directly incorporating knowledge of graph structure into LLMs leads to the loss of the logical order in eliciting procedural knowledge from LLMs. Thus, we apply symbolic executors (Mao et al., 2019; Yi et al., 2018) that execute the sequential mapping program on latent knowledge representations (e.g., the subevent of). In this way, we transit graph structure knowledge into natural language that preserves procedural structure, such as the sequential order of two low-level steps (Symbolic Structuring of Stage2 in Section 3.1). The procedural prompt PG (e.g, “please get the remote control”) is further translated into admissible one P̂G (e.g., “grab remote control”) from available steps in a certain domain (RobotHow or WikiHow in our case). Finally, we utilize the commonsense-infused prompt P̂G to control the generation of procedural plans in LLMs in a zero-shot setting (Section 3.2).
We conducted experiments on RobotHow (Puig et al., 2018) and WikiHow (Koupaee & Wang, 2018) under original and counterfactual situations. Our major contributions can be summarized as follows:
• We develop the first causal framework for procedural planning by 1) defining a temporally extended Structural Causal Model and 2) resolving the spurious correlation between high-level goals and low-level steps via front-door adjustment with a prompt-based mediator.
• We propose a neuro-symbolic approach to construct commonsense-infused prompts for LLMs to tackle the procedural planning task without manual exemplars or further training.
• Extensive evaluations show the superiority of PLAN in reasoning about the cause-effect relations among goals and steps and in achieving promising planning ability.
2 EXTERNAL KNOWLEDGE MATTERS IN PROCEDURAL PLANNING
As depicted in Figure 1, procedural planning requires generating the Plan (e.g., Step 1: Walk to the living room.) conditioned on the Task (e.g., Watch TV). We first describe the problem definition
and then show why external knowledge matters in procedural planning through the lens of causality. Finally, we show how we elicit procedural ability from the Large Language Models (LLMs).
2.1 PROBLEM DEFINITION
Given a high-level task T (e.g., watch television in the living room) sampled from a task domain MT (e.g., RobotHow), a procedural planner aims to decompose it into lower-level, temporally extended steps ST = {S1, ..., Si | Si ∈ S̄}. There exists a certain set of admissible plans S̄, which is fixed and constrained by the task domain MT (e.g., the affordances of the interacted objects). The plan Si at timestep i is generated as π(Si | T, S0:i−1).
2.2 A CAUSAL LOOK AT PROCEDURE PLANNING WITH LLMS
We seek to empower the LLMs with the ability to reason cause-effect relations in procedural planning. Thus, we devise a causal framework by first defining a Structural Causal Model (SCM) of procedural planning in Figure 2. The SCM describes the temporal dynamics and procedural cause-effect relationship. Our causal assumption in SCM indicates that there is a backdoor path from task to step, which must be blocked with front-door adjustment. Therefore, we model the input prompt as a mediator which is created from external knowledge. More specifically, we define our Full Temporal Causal Graph as in Figure 2a, which is an unrolled Structural Causal Model (SCM) for sequential decision-making. Our goal is to identify the causal relations between the attended task T and plan procedures ST = {S1, S2, . . .} from LLMs. Initially, there are direct paths T → Si and Sk → Si, k < i because Si relies on the LLM attended task entities and previous accomplished steps. D is an unobserved confounder from learned knowledge during pre-training. D builds a backdoor path between T and Si and misguides the LLMs to attend to false entities to generate the next step (see Fig. 2b). Note that D is unobservable as we directly adopt the LLM without knowing the pre-training data. To mitigate the spurious correlation, we then introduce a mediator Pi for each Si as shown in Figure 2a. To achieve our front-door adjustment, we inject external knowledge into LLMs with a neuro-symbolic approach by adopting three stages described in Section 3.1.
3 OUR APPROACH
Although LLMs have strong general language intelligence, they still perform poorly in reasoning the cause-effect relations in procedural plans due to a lack of daily life experience. We propose to elicit the unbiased procedural planning knowledge from the LLMs using the created commonsense-infused Prompt P as π(Si|T, S0:i−1, P ). Figure 3 and Algorithm 1 depict how PLAN tackles the procedural
planning in a five-stage manner. We illustrate the commonsense-infused prompt construction (the first three stages) in Section 3.1 and planning with LLMs (the last stage) in Section 3.2.
3.1 COMMONSENSE-INFUSED PROMPT CONSTRUCTION
Overview. Inspired by the causal analysis in Section 2.2, we propose to construct a commonsense-infused Prompt P that helps reveal the cause-effect relations among the goals and steps during procedural planning in 3 stages: 1) Stage 1 samples a subgraph Gs from the external knowledge base G by extracting task (T)-relevant nodes. 2) Stage 2 adapts the edge weights Ew in Gs and applies symbolic structuring to obtain the admissible knowledge prompt P̂G. 3) Stage 3 acquires the temporal order by temporally aggregating the prompt Pi with previous steps S0:i−1.
Stage 1: Task-Relevant Knowledge Subgraph Sampling First, we investigate the causal effects T → Pi and Si−1 → Pi (Figure 2). Si is a collider that blocks the association between D and Pi in the path T ← D → Si ← Pi. Let πi denote π(·|Pi−1), the probability density function conditioned on Pi−1. Since there is no backdoor path for T → Pi, and similarly for Si−1 → Pi, we simply have the conditional probabilities after applying the do-operators:
$\pi_i(P_i = p \mid do(T)) = \pi_i(P_i = p \mid T), \qquad \pi_i(P_i = p \mid do(S_{i-1})) = \pi_i(P_i = p \mid S_{i-1})$ (1)
We achieve the do-operation through prompting, by modifying the goal input so that the model attends to the task-relevant entities. Concretely, we use NLTK to tokenize and POS-tag the task text T. We then use the nouns (e.g., television), noun phrases (e.g., remote control), and verb phrases (e.g., watch television) as concept nodes. In this way, the task name T is Semantically Parsed into the Concept Set TE. Each concept e ∈ TE is used as a query for sampling the H-hop task-relevant subgraph Gs ⊆ Ne × Rs × Ne from the external knowledge base G ⊆ N × R × N, where N and R denote the concept nodes and commonsense relations, respectively. When extracting Gs, we keep the triplets with relation types in the household domain (e.g., AtLocation, UsedFor) and filter out those in the linguistic domain (e.g., DistinctFrom, DerivedFrom), which are irrelevant to the procedural planning task. Ne is maintained as a set of the top-k task-relevant nodes using the weight of each edge Re, which is updated with edge-wise adaption in Stage 2.
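A minimal sketch of this stage is shown below; the in-memory triplet list stands in for a real ConceptNet dump, and the helper names are our own. (With NLTK, this assumes the punkt and averaged_perceptron_tagger resources have been downloaded.)

```python
import nltk  # assumes nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

HOUSEHOLD_RELATIONS = {"Synonym", "AtLocation", "CapableOf", "Causes",
                       "CausesDesire", "HasPrerequisite", "HasSubevent", "UsedFor"}

def parse_task_concepts(task: str) -> set:
    """Semantically parse the task name T into the concept set T_E (nouns and verbs)."""
    tagged = nltk.pos_tag(nltk.word_tokenize(task.lower()))
    return {word for word, tag in tagged if tag.startswith(("NN", "VB"))}

def sample_subgraph(concepts: set, triplets: list, hops: int = 3) -> list:
    """Keep household-domain triplets reachable from T_E within H hops."""
    frontier, subgraph = set(concepts), []
    for _ in range(hops):
        next_frontier = set()
        for triplet in triplets:
            head, rel, tail, weight = triplet
            if rel in HOUSEHOLD_RELATIONS and (head in frontier or tail in frontier):
                if triplet not in subgraph:
                    subgraph.append(triplet)
                next_frontier.update({head, tail})
        frontier |= next_frontier
    return subgraph

triplets = [("television", "AtLocation", "living room", 4.0),
            ("television", "UsedFor", "watch television", 3.0),
            ("sofa", "AtLocation", "living room", 2.0)]
print(sample_subgraph(parse_task_concepts("Watch television"), triplets))
```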
Stage 2: Edge-Wise Adaption and Symbolic Structuring Second, we need to identify the causal effect of Pi → Si. Since the path Pi ← T ← D → Si opens a backdoor from Pi to Si, we cannot rely on the conditional probability alone. Instead, we intervene on Pi with the do-operator, which cuts off D → T:
$\pi_i(S_i \mid do(P_i = p)) = \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\,\pi_i(T = t, S_{i-1} = s)$
$= \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\,\pi_i(S_{i-1} = s \mid T = t)\,\pi_i(T = t)$ (2)
The retrieved concept-centered graph has multiple edges representing various relationships with other actions/entities. The summation over the intervened T can therefore be achieved by incorporating these edges into the prompt. For instance, "living room" can be "walked to" and "used for reading", while "book" can be located in "living room" and "bedroom". Similarly, we extrapolate over the edges for i − 1 hops to aggregate the intervened Si−1, i.e., πi(Si−1 = s|T = t). Directly ranking the retrieved nodes Ne by the annotated weight (Ew) in the external knowledge base would result in a spurious correlation, because such retrieved local subgraphs tend to capture task-invariant concept nodes as the causal factors. To mitigate this, we propose to adapt the weight of each triplet (Edge-wise Adaption). The adapted weight is the sum of the original edge weight and the cosine similarity between the tail-node embedding nEtail of the edge Re and the task embedding vtask: Êw ← Ew + cosine(nEtail, vtask). The embeddings are projected from the node text and task name using the sentence-transformer (Reimers & Gurevych, 2019). The nodes Ne are finally retrieved by ranking the adapted weight Êw. To better track the utilized external knowledge during inference, we construct the task-dependent commonsense prompt with a Symbolic Executor (Symbolic Structuring), guided by the relation type of each triplet in Gs whose adapted edge weight is above the threshold θe. Specifically, the Symbolic Executor acquires the neural information of each natural-language node and executes a sequential mapping program by sampling an operation Op from the Symbolic Rule Set R according to the edge relation type. The Symbolic Rule Set R is obtained by mapping the descriptions of the relations in the external knowledge graph (e.g., ConceptNet describes AtLocation as "A is a typical location for B, or A is the inherent location of B; some instances of this would be considered meronyms in WordNet.") to symbolic operations (e.g., Op_AtLocation). For instance, an AtLocation edge samples the operation Op_AtLocation from R, which takes the commonsense relation of the triplet from Gs as its parameters and queries the procedural concept output given the natural-language meaning of the linked nodes (e.g., "go to the location of Start_Node_Of(re)" in this case). Similarly, Op_UsedFor may map to "go to find End_Node_Of(re) and use it for Start_Node_Of(re)", and the operators Op_HasSubevent and Op_HasPrerequisite recursively navigate the subgraph Gs. After navigating the subgraph, we linearize the transformed triplets into the Procedural Prompt PG, which is then translated into the Admissible Knowledge Prompt P̂G by the Translation Language Model LMT.
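The edge-wise adaption can be sketched in a few lines; this assumes the sentence-transformers package, and the model name below is a common lightweight choice rather than the paper's exact checkpoint (a RoBERTa-large sentence-transformer).

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the paper's sentence encoder

def adapt_and_rank(task: str, triplets: list, theta_e: float = 0.6, top_k: int = 10) -> list:
    """Compute E_w_hat = E_w + cosine(n_tail, v_task), filter by theta_e, keep top-k."""
    v_task = encoder.encode(task, convert_to_tensor=True)
    adapted = []
    for head, rel, tail, e_w in triplets:
        n_tail = encoder.encode(tail, convert_to_tensor=True)
        e_w_hat = e_w + util.cos_sim(n_tail, v_task).item()
        if e_w_hat >= theta_e:
            adapted.append((head, rel, tail, e_w_hat))
    # re-rank by the adapted weight and keep the top-k task-relevant edges
    return sorted(adapted, key=lambda t: t[3], reverse=True)[:top_k]
```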
Stage 3: Temporally-Extended Aggregation To acquire the temporal order of the procedure, we obtain the prompt Pi at timestep i by aggregating the task T, the history steps S0:i−1, and the current external knowledge P̂G. The underlying causal mechanism is a combination of Eq. 1 and Eq. 2:
$\pi_i(S_i \mid do(T), do(S_{i-1})) = \sum_{p} \pi_i(S_i \mid do(P_i = p))\,\pi_i(p \mid do(T), do(S_{i-1}))$
$= \sum_{p} \pi_i(p \mid T, S_{i-1}) \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\,\pi_i(T = t, S_{i-1} = s)$ (3)
The adjustment and marginalization in Eq. 3 are achieved in the input space by forming the Procedural Prompt PG, which lets the LLM attend to the causal entities instead of merely the highly correlated ones when generating the next step. The LLM can reason over the most relevant edges to link the concepts with the task entities as context. The prompts from knowledge bases are independent of the pre-training data distribution, so Pi is independent of D and satisfies the front-door criterion. Please refer to Appendix A.3 and Figure 4 for the simplification of our structural causal model.
3.2 PROCEDURAL PLANNING WITH LARGE LANGUAGE MODELS
Stage 4: Semantic Generation The external knowledge is concatenated with the goal input (T) as the initial prompt. Given the prompt, the generation language model LMG ∈ {PAR, PAE} (e.g., GPT-3, BART) generates the next sentence, and the most confident prediction is appended to the previous prompt. The Termination Condition is either reaching the maximum number of steps or the matching score falling below the threshold θ. The joint probabilities of the auto-regressive (PAR) and auto-encoder (PAE) models are factorized as:
$\pi_{AR}(x) = \prod_{i=1}^{n} p(s_i \mid \hat{P}_G, s_{1:i-1}, T), \qquad \pi_{AE}(x) = \prod_{i=1}^{n} p(s_i \mid \hat{P}_G, \{s_{1:i-1}, [\mathrm{MASK}]\}, T)$ (4)
where P̂G represents the commonsense knowledge and T represents the task name.
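As a concrete, hedged illustration of the autoregressive factorization in Eq. (4), the snippet below scores a candidate next step by its token-level log-likelihood under GPT-2 via HuggingFace transformers; it is a sketch, not the released implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def step_log_prob(prompt: str, candidate_step: str) -> float:
    """log p(s_i | P_G_hat, s_1:i-1, T): sum of candidate-token log-probabilities."""
    prompt_ids = tokenizer.encode(prompt)
    step_ids = tokenizer.encode(" " + candidate_step)
    input_ids = torch.tensor([prompt_ids + step_ids])
    log_probs = model(input_ids).logits.log_softmax(dim=-1)
    # the token at position t is predicted from position t - 1
    return sum(log_probs[0, pos - 1, tok].item()
               for pos, tok in enumerate(step_ids, start=len(prompt_ids)))
```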
Algorithm 1 Neuro-Symbolic Procedural Planning using Commonsense-Infused Prompting
Require: Task sample T, admissible step set S̄, external knowledge graph G; language models for generation LMG and translation LMT; Symbolic Rule Set R.
Ensure: Procedural plan ST = {S1, ..., Si}.
1: [Stage 1] Semantically parse T into the concept set TE;
2: Maintain the top-k task-relevant nodes Ne for TE;
3: Retrieve the subgraph Gs ⊆ Ne × Rs × Ne from G ⊆ N × R × N for each e ∈ TE;
4: [Stage 2] Perform edge-wise adaption Êw ← Ew + cosine(nEtail, vtask) and re-rank Ne;
5: Map the description text of the relations Rs in Gs to the Symbolic Rule Set R;
6: Construct the procedural prompt PG by verbalizing the re-weighted Gs using R;
7: Translate PG into the admissible knowledge prompt P̂G = LMT(PG);
8: for each timestep i do  ▷ temporally-extended zero-shot inference
9:   [Stage 3] Aggregate the prompt Pi ← [T; S0:i−1; P̂G];
10:  [Stage 4 & 5] Si = LMT(LMG(Pi));
11:  Update the procedural plan ST ← ST ∪ {Si};
12: end for
Stage 5: Admissible Step Translation To ensure that the generated procedural plans are grounded in the environment, we must avoid producing inadmissible steps (e.g., "Toast the table"). In other words, the generated steps should be fully constrained to admissible composites of action and object in the given task domain. Previous works (Huang et al., 2022; Ahn et al., 2022) have therefore explored using a model (LMT in our case) to score steps selected from a fixed set of available options, instead of sampling directly from the output distribution of the language model (LMG in our case). Specifically, we match the step generated by LMG to the most similar admissible step in the embedding space encoded by the Translation Language Model LMT. Following Huang et al. (2022), we utilize a Sentence-Transformer (Reimers & Gurevych, 2019) to compute cosine similarity, i.e., π(si|x) = LMT(LMG(x)), which translates LMG(x) into the admissible step si ∈ S̄ that is closest in the embedding space.
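A minimal sketch of this matching scheme, again assuming the sentence-transformers package (the model name is illustrative):

```python
from sentence_transformers import SentenceTransformer, util

lm_t = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for LM_T

def translate_step(generated: str, admissible_steps: list, theta: float = 0.7):
    """Map the LM_G output to the nearest admissible step; None terminates (score < theta)."""
    scores = util.cos_sim(lm_t.encode(generated, convert_to_tensor=True),
                          lm_t.encode(admissible_steps, convert_to_tensor=True))[0]
    best = int(scores.argmax())
    return admissible_steps[best] if scores[best].item() >= theta else None

print(translate_step("switch the television on",
                     ["Walk to television", "Switch on television", "Sit on sofa"]))
```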
3.3 COUNTERFACTUAL PROCEDURAL DATA CONSTRUCTION
To investigate counterfactual reasoning ability, we design three families of intervention methods: 1) Initial Configuration: intervene on the initial configuration, such as the location in which the task is implemented. 2) Intermediate Step: randomly select one step from the ground-truth program as an additional constraint on implementing the task and append it to the task name for generating the procedural plan. 3) Final Goal: intervene on the task goal by compositing it with another randomly sampled task. Table 5 in the Appendix summarizes the categories and descriptions. The counterfactual dataset construction details and post-intervention examples are provided in Appendix B.2.
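The three intervention families can be sketched as simple program transformations; the function and field names below are our own illustration, not the released data schema.

```python
import random

def initial_configuration(task: str, steps: list, location: str):
    """Constrain the location: 'Task in <Location>' with 'Walk to <Location>' prepended."""
    return f"{task} in {location}", [f"Walk to {location}"] + steps

def intermediate_step(task: str, steps: list):
    """Append one randomly sampled ground-truth step to the task name as a constraint."""
    return f"{task}. {random.choice(steps)}", steps

def final_goal(task_a: str, steps_a: list, task_b: str, steps_b: list):
    """Compose two tasks into one long-horizon task by merging names and programs."""
    return f"{task_a} and {task_b}", steps_a + steps_b
```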
4 EXPERIMENTS
4.1 PROCEDURAL PLANNING SETUP
Datasets We conduct zero-shot experiments, without any training, on two datasets with procedural information: WikiHow (collected following Koupaee & Wang (2018)) and RobotHow (Puig et al., 2018). WikiHow is a large-scale text summarization dataset constructed from a human-written knowledge base, involving procedural tasks that span various topics; we use the "how to" titles as task names and the summarized headlines as steps. RobotHow is a large knowledge base of common household tasks collected in the VirtualHome (Puig et al., 2018) simulator; it contains programs with high-level task names and low-level steps. MT is composed of 292 and 2000 distinct tasks from RobotHow and WikiHow, respectively. Human evaluations use 50 randomly sampled task examples per dataset. Automatic evaluations use 150 and 1000 task examples randomly sampled from RobotHow and WikiHow, respectively. Please refer to Appendix B.1 and Appendix B.2 for dataset details.
Baselines We compare our approach with three vanilla generative pre-trained language models (BART, GPT2, and GPT3) and two powerful generation baselines (Zero-shot Planner (Huang et al., 2022) noted as “LLMaP” and Chain of Thought (Wei et al., 2022) noted as “Chain”). More method and configuration details of the models can be found in Appendix B.3 and Appendix B.4.
Metrics We ask human annotators on the Amazon Mechanical Turk platform to rate model performance on two aspects: 1) Coverage: which sequence covers more of the steps necessary to complete the target task (captures semantic completeness). 2) Order: which set of steps better completes the target task when the sequential order is considered (captures sequential order correctness). In addition, we use Sentence-BLEU (S-BLEU) (Papineni et al., 2002), BERTScore (Zhang* et al., 2020), ROUGE-1 (Lin, 2004), and Word Mover's Distance (WMD) (Kusner et al., 2015) as automatic evaluation metrics. These metrics compute semantic scores between the annotated programs and the predictions. Details of the crowdsourced human evaluation can be found in Appendix C.1.
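For reference, the automatic metrics can be computed roughly as follows (assuming the nltk, rouge-score, and bert-score packages; WMD via gensim word vectors is analogous and omitted for brevity). The example strings are placeholders.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer
from bert_score import score as bert_score

reference = "walk to television . switch on television . sit on sofa"
prediction = "walk to television . sit on sofa . watch television"

s_bleu = sentence_bleu([reference.split()], prediction.split(),
                       smoothing_function=SmoothingFunction().method1)
rouge1 = rouge_scorer.RougeScorer(["rouge1"]).score(reference, prediction)["rouge1"].fmeasure
_, _, f1 = bert_score([prediction], [reference], lang="en")
print(f"S-BLEU={s_bleu:.3f}  ROUGE-1={rouge1:.3f}  BERTScore-F1={f1.item():.3f}")
```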
4.2 HUMAN EVALUATION RESULTS WITH COVERAGE AND ORDER METRIC
Each example is rated by 3 crowdsourcing annotators. For the Win-Lose Comparison, we ask the human raters to choose between ours and the baseline LLMaP (Huang et al., 2022). Averaged results reported in Table 1 show that PLAN is more frequently rated as better on both metrics, outperforming the baseline in winning ratio by 21% in coverage and 26% in order across the two datasets. We report the averaged Human Ratings on a 5-point Likert scale in Table 2. The consistent performance boost of PLAN indicates the superiority of injecting external commonsense knowledge into the procedural planning task. The performance drops of LLMaP and Chain in the counterfactual setting indicate the vulnerability of fixed holdout knowledge and pre-defined manual exemplars in causal procedural planning. Please refer to Appendix C.1 for the crowdsourcing human evaluation interface details. Table 3 shows two examples for Qualitative Comparison; more examples can be found in Appendix D.
4.3 AUTOMATICALLY MEASURING THE PROCEDURAL PLANNING
Main Results Table 4 summarizes the automatic evaluation results. PLAN achieves the best results regardless of the language model architecture, whether autoregressive or autoencoder based. The performance gain of "LLMaP" over "Chain" is probably due to its direct exposure to the holdout tasks from the dataset, while the "Chain" baseline still outperforms the vanilla baseline that only takes the high-level task name as the prompt. Note that the annotated program is not the only valid solution, so these automatic metrics provide limited information about absolute performance. Details on the correlation between automatic metrics and human evaluation can be found in Section 4.5.
Effects of Edge-wise Adaption and Symbolic Program Execution The variant "w/o Adaption" maintains the top-k task-specific nodes ranked by the annotated weight Ew in the external knowledge base G, without adaption. The variant "w/o Symbolic" directly takes the extracted concept nodes from the external knowledge base as the prompt. The performance drops of these two variants in Table 4, with significance tests in Appendix C.2, demonstrate the importance of the adaption and symbolic modules.
Effects of the Large Language Model Architecture We use GPT-2 and GPT-3 as autoregressive architectures and BART (Lewis et al., 2020) as an autoencoder architecture. The autoregressive architecture achieves better results than the autoencoder one. Since the pre-training objective of the autoregressive GPT models is to predict the next token given the previous input tokens, we attribute the performance gain of GPT to the smaller gap between the pre-training objective and procedural planning.
Level of Complexity We report results on the test set separated into several buckets according to the number of steps in the procedural planning task; the step count reflects the difficulty of the task. In Table 7 and Table 8 in Appendix C.2, we show that the averaged performance gain of PLAN over the baselines is consistent, and often more significant, in more complicated procedural planning settings. This indicates the superiority of PLAN in solving long-horizon tasks.
4.4 RESULTS ON COUNTERFACTUAL TASK SAMPLES
We apply the Initial Configuration, Intermediate Step, and Final Goal interventions on RobotHow, and the Intermediate Step intervention on WikiHow. Human evaluations under the counterfactual setting are summarized in Table 1 and Table 2. PLAN consistently outperforms the baselines by a large margin and experiences a much smaller performance drop than the strong baselines when switching to the counterfactual setting. We attribute this to the biased knowledge of the holdout examples and manual exemplars used by the baselines, which are vulnerable to counterfactual samples. Automatic evaluations on counterfactual RobotHow are summarized in Table 13 in Appendix C.2. Aligned with the human evaluations, PLAN achieves the best performance. The overall poor performance in the Final Goal category indicates the challenge of long-horizon and composite procedural planning, while the overall better performance in the Intermediate Step category benefits from the intermediate guidance.
4.5 CORRELATION BETWEEN AUTOMATIC AND HUMAN EVALUATION
We evaluate the segment-level Pearson correlation between human and automatic metrics. We observe that BERTScore has a moderate correlation with the human coverage score and WMD has a moderate correlation with the human order score, at 23.3% and 32.3%, respectively. In line with prior findings (Xu et al., 2021), n-gram-based metrics (Sentence-BLEU and ROUGE) have relatively weaker correlations with the human coverage score, with Pearson correlations of 16.4% and 21.1%. Overall, the automatic and human evaluation scores are consistent with the main claim of this paper. However, human evaluation remains irreplaceable for procedural planning at the current stage.
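The segment-level correlation itself is a one-liner with scipy; the arrays below are placeholder values, not the paper's data.

```python
from scipy.stats import pearsonr

bert_scores = [0.71, 0.64, 0.80, 0.55, 0.68]   # automatic metric per segment
human_coverage = [4.0, 3.5, 4.5, 2.5, 3.0]     # human rating per segment

r, p_value = pearsonr(bert_scores, human_coverage)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```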
5 RELATED WORK
Procedural Planning Learning to generate procedural plans (Zhang et al., 2020a; Lyu et al., 2021; Zhang et al., 2020b; Chang et al., 2020; Wu et al., 2022; Huang et al., 2022) is important for embodied agents (Tellex et al., 2011; Jansen, 2020; Ahn et al., 2022) and conversational assistants (Ilievski et al., 2018; Yang et al., 2022). Previous work views procedural script learning as a structured form of commonsense knowledge (Gupta et al., 2004; Regneri et al., 2010; Wanzare et al., 2016), while more recent work strengthens its association with changing environments for executable action planning (Puig et al., 2018; Shridhar et al., 2020). Some works (Sun et al., 2020; Zhao et al., 2021) explore utilizing human-written programs to precisely specify tasks. Our method tackles the problem with awareness of cause-effect relations by utilizing commonsense-infused prompts via a neuro-symbolic approach (Mao et al., 2019; Nye et al., 2021; Yi et al., 2018) for zero-shot procedural planning.
Causality for Language Generation The integration of causality and machine learning has been an intriguing topic for many problems (Pearl, 2009; Schölkopf, 2022). Previous studies focus on causal inference for natural language understanding (Chen et al., 2020; Keith et al., 2020; Wood-Doughty et al., 2018) and on generating counterfactual text representations (Feder et al., 2021). Weber et al. (2020) propose an intervention method for script learning. However, these methods cannot be directly applied to procedural planning, which requires a formal structure. Our method is based on mediation analysis (VanderWeele, 2015) and causal intervention (Pearl, 2009; Peters et al., 2017).
Prompt for Large Language Models There is an emerging interest in using prompts to extract knowledge from large language models (Chen et al., 2022; Le Scao & Rush, 2021; Su et al., 2022; Ye et al., 2022; Zhou et al., 2022; Kojima et al., 2022). Cao et al. (2022) treat the prompt as a cause of the task-specific predictor and investigate biases in prompt-based probing evaluations. Chain of thought (Wei et al., 2022) discovers that LLMs can perform better on reasoning tasks when the prompt is designed as a series of short sentences that mimic the human reasoning process.
6 CONCLUSION AND FUTURE WORK
Procedural planning is a newly emerging research area of great importance to various applications, such as household robots and virtual assistants. We propose a neuro-symbolic procedural PLANner (PLAN) with commonsense-infused prompts elicited from an external knowledge base to solve the procedural planning problem in a zero-shot manner, without human-annotated exemplars. Experiments show the effectiveness of PLAN under both the original and counterfactual settings, indicating its capability of mitigating spurious correlations by injecting external knowledge into LLMs. Nevertheless, procedural planning over long-horizon and composite tasks remains challenging, and exploring multimodal learning and developing human-aligned evaluation metrics are promising future directions in this area.
7 ETHICAL STATEMENT
Given the limited cultural diversity of the datasets we use, RobotHow and WikiHow, our results may be biased toward a single cultural background. For instance, for the task "make breakfast", multiple cultures should be taken into consideration when generating the procedural plans.
8 REPRODUCIBILITY STATEMENT
We provide more data samples and qualitative samples in the supplemental materials. In addition, we provide our code implementation at https://anonymous.4open.science/r/PLANNER-7B24 to reproduce our experiments. The Preprocess folder provides the utilities to construct the data; the Evaluation folder provides the code for the automatic and human evaluation tools; the Planning folder contains the main code for our approach and the reproduced planners for procedural planning; and the Visualization folder provides the code we use for visualization in the environment.
ACKNOWLEDGMENTS
The research was sponsored by the U.S. Army Research Office and was accomplished under Contract Number W911NF-19-D-0001 for the Institute for Collaborative Biotechnologies. This work was also supported by the National Science Foundation award #2048122. We thank the Robert N. Noyce Trust for their generous gift to the University of California via the Noyce initiative. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Appendix
Table of Contents
A SCM Theoretical Details
  A.1 Causal Preliminaries
  A.2 The Backdoor Adjustment
  A.3 The Front-door Adjustment
B Implementation Details
  B.1 Original Dataset Details
  B.2 Counterfactual Dataset and Experiment Details
  B.3 Method Details
  B.4 Hyperparameter Search and Configuration Decision
  B.5 Computation and Resources
C Evaluation Details
  C.1 Crowdsourcing Human Evaluation
  C.2 More Results
D Qualitative Examples
  D.1 Intermediate Output
  D.2 Predicted Procedural Plans
E Discussion
  E.1 Limitations
  E.2 Failure Analysis
  E.3 Ethical Considerations
A SCM THEORETICAL DETAILS
A.1 CAUSAL PRELIMINARIES
The Structural Causal Model (SCM) is a directed acyclic graph (DAG) that describes the causal relationships within a system (Pearl, 2009). In this paper, we refer to the SCM unrolled along the time dimension as the full temporal causal graph, while the rolled-up version is also called the causal summary graph (Peters et al., 2017). In an SCM, if a variable D is a cause of both T and Si, it is called a confounder. A confounder opens up a backdoor path and causes a spurious correlation between T and Si. A backdoor path is defined as any remaining path between T and Si once all arrows pointing out of T are removed; therefore, T ← D → Si is a backdoor path. For our SCM with mediator Pi shown in Figure 4c (same as Figure 2b in the main paper), there is no backdoor path between T and {Pi, Si−1}, because only D → T is left after removing the outgoing arrows of T. On the other hand, there is a backdoor path between Pi and Si, i.e., Pi ← T ← D → Si, so Pi indirectly affects the observation of Si through {T, Si−1} and D. A mediator is a variable added between the treatment variables (the causes T and Si−1 in our case) and the outcome variable (the effect Si in our case), and it thus blocks all directed paths from cause to effect (Zhang et al., 2016). A spurious correlation arises when two variables are statistically related but not causally related, either because a third variable influences both at the same time or because the correlation is coincidental.
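As a small sanity check of these definitions, the snippet below mirrors the graph of Figure 2b with networkx and enumerates the backdoor paths from T to S_i (undirected paths whose first hop enters T); it is illustrative only.

```python
import networkx as nx

# Edges of the SCM in Figure 2b: confounder D, mediator P_i, previous step S_im1.
scm = nx.DiGraph([("D", "T"), ("D", "S_i"), ("T", "S_i"),
                  ("T", "P_i"), ("S_im1", "P_i"), ("P_i", "S_i")])

undirected = scm.to_undirected()
backdoor_paths = [path for path in nx.all_simple_paths(undirected, "T", "S_i")
                  if scm.has_edge(path[1], path[0])]  # first hop points *into* T
print(backdoor_paths)  # [['T', 'D', 'S_i']]
```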
To identify the true causal effect between X and Y, we aim to estimate the conditional π(Y |do(X)) after intervention with the do-operator. The do-operator breaks the backdoor path by setting X to a fixed value independent of Z; the path Z → X can then be removed, eliminating the backdoor paths. In practice, the backdoor adjustment and the front-door adjustment are the two fundamental methods for implementing interventions and obtaining the conditional π(Y |do(X)).

Clarity of the Definition As a language prompt, Pi inherits its content from Pi−1 and thus can be detached from the steps before Si−1 for simplicity.
Causal Intervention There are two types of operations to control the confounding bias: the backdoor adjustment and the front-door adjustment (Pearl, 2009). The backdoor adjustment is intractable in our case because it requires the prior distribution of the confounding variables. On the other hand, we can construct an input prompt as a mediator Pi for T → Si and Si−1 → Si. The front-door adjustment then applies a two-step do-operation to mitigate bias by investigating Pi → Si (Pearl, 2009). Specifically, we construct the prompt mediator Pi using the techniques illustrated in Section 2.2.
The pre-trained knowledge (D) in LLMs confounds the language model, biasing its decisions toward unreasonable actions. Since the confounder is unobservable, intervention techniques such as the backdoor adjustment (definition in Appendix A.2) (Hu & Li, 2021; Weber et al., 2020; Yue et al., 2020) are not applicable in our SCM. Instead, we build a mediator and implement it as a commonsense-infused prompt. Through the mediator, we can identify the causal effects among goals and steps by investigating the indirect effect from the goals, which is essentially the front-door adjustment (definition in Appendix A.3) in causality (Pearl, 2009).
A.2 THE BACKDOOR ADJUSTMENT
The backdoor adjustment is one way to realize the intervention do(T = t), by considering the conditional probability over the existing data distribution with an observed confounder D. Let πi denote π(·|Pi−1), the probability density function conditioned on Pi−1. The adjustment calculates the average causal effect by considering all strata of the dataset:
$\pi_i(S_i \mid do(T)) = \sum_{d} \pi_i(S_i \mid T, D = d)\,\pi_i(D = d)$ (5)
However, for LLMs, the pre-training data is usually unobservable and has been transformed into knowledge incorporated in the hidden space. Therefore, we cannot directly apply the backdoor adjustment.
A.3 THE FRONT-DOOR ADJUSTMENT
The front-door adjustment is another technique for applying interventions, introducing a mediator Pi when the confounder is unobservable. As explained in Section 2.2 of the main paper, the front-door adjustment is equivalent to two consecutive do-operations, on the task T and the prompt Pi. We first investigate the generation of S1 and then extend it to St.
Timestep i = 1 As shown in Figure 4a, since there are no preceding steps, the first step generation involves only D, T, and P1. Similar to the proof in Section 2.2 of the main paper, we have:
$\pi_i(S_1 \mid do(T)) = \sum_{p} \pi_i(S_1 \mid do(P_1 = p))\,\pi_i(p \mid do(T))$
$= \sum_{p} \pi_i(p \mid T) \sum_{t} \pi_i(S_1 \mid p, T = t)\,\pi_i(T = t)$ (6)
By intervening on T, we make the value do(T = t) independent of the confounder D from the beginning; the backdoor path through D → T is thereby eliminated.
Timestep i > 1 As shown in Figure 2a of the main paper, we model the mediator Pi as an effect of three variables: T, Pi−1, and Si−1. The first step of our front-door adjustment is to apply the do-operator on these three variables and observe the change in Pi, as explained in Section 2.2 of the main paper. Since there are no backdoor paths between Pi and these variables, the post-intervention probabilities equal the conditional probabilities without intervention:
$\pi_i(P_i = p \mid do(T)) = \pi_i(P_i = p \mid T)$ (7)
$\pi_i(P_i = p \mid do(P_{i-1})) = \pi_i(P_i = p \mid P_{i-1})$ (8)
$\pi_i(P_i = p \mid do(S_{i-1})) = \pi_i(P_i = p \mid S_{i-1})$ (9)
The second step is to apply the do-operator on Pi and identify the causal effect as:

$\pi_i(S_i \mid do(P_i)) = \sum_{t, p', s} \pi_i(S_i \mid P_i, T = t, P_{i-1} = p', S_{i-1} = s)\,\pi_i(T = t, P_{i-1} = p', S_{i-1} = s)$ (10)

Combining Equations 7–9 and Equation 10, we obtain the front-door adjustment. Note that there are three backdoor paths, one from each of the variables T, Pi−1, and Si−1, as shown in Figure 4b (drawn in blue, red, and purple). More importantly, the one through T, i.e., Pi ← T ← D → Si (the blue path in Figure 4b), and the one through Pi−1, i.e., Pi ← Pi−1 ← T ← D → Si (the red path in Figure 4b), share the same subpath. The intervention on the task T breaks the backdoor paths for both T and Pi−1. Therefore, we have our front-door adjustment as
$\pi_i(S_i \mid do(S_{i-1}), do(P_{i-1}), do(T))$ (11)
$= \sum_{p} \pi_i(S_i \mid do(P_i = p))\,\pi_i(p \mid do(S_{i-1}), do(P_{i-1}), do(T))$ (12)
$= \sum_{p} \pi_i(S_i \mid do(P_i = p))\,\pi_i(p \mid do(S_{i-1}), P_{i-1}, do(T))$ (13)
$= \sum_{p} \pi_i(S_i \mid do(P_i = p))\,\pi_i(p \mid do(S_{i-1}), do(T))$ (14)
$= \sum_{p} \pi_i(p \mid S_{i-1}, T) \sum_{s,t} \pi_i(S_i \mid p, S_{i-1} = s, T = t)\,\pi_i(S_{i-1} = s, T = t)$ (15)
$= \pi_i(S_i \mid do(S_{i-1}), do(T))$ (16)
We have Equation 13 because of the intervention on T and Rule 2 (Pearl, 1995), and Equation 14 because of Rule 1 (Pearl, 1995). After simplification based on Equations 12–16, we obtain the SCM at timestep i > 1 in Figure 4c. This is an equivalent SCM after eliminating Pi−1 in Figure 4b. The reason we can eliminate Pi−1 is as follows. We follow a common method of constructing temporally-extended prompts, which is to append the predictions at previous timesteps to the prompt at the current timestep. In our case, PG,i is the same as PG,i−1, so Pi inherits part of its content from Pi−1 and the change depends only on Si−1. Thus Pi−1 and Si−2 are fixed, and there is no need to predict Pi−1 again at timestep i. In this way, we simplify the causal graph in Figure 4b to the one in Figure 4c. In summary, we define and simplify the causal graph based on the temporally-extended property of our prompt construction (Pi inherits content from Pi−1). We end up with Equations 14–16, shown as Equation 3 in Section 2.2 of the main paper.
B IMPLEMENTATION DETAILS
B.1 ORIGINAL DATASET DETAILS
RobotHow This dataset is released under an Attribution-NonCommercial-ShareAlike 4.0 International Creative Commons License. We evaluate inference on 150 tasks randomly selected from the dataset. Each program contains the task name, a task description, and the steps; we use the task name and the sequence of steps as our input and output references. Each step is a composition of [Action], [Object], and [Number]. For example, the steps of the task "Watch TV" are: 1. [Walk] <TELEVISION> (1) 2. [SwitchOn] <TELEVISION> (1) 3. [Walk] <SOFA> (1) 4. [Sit] <SOFA> (1) 5. [Watch] <TELEVISION> (1).
WikiHow This dataset2 is released under an Attribution-NonCommercial-ShareAlike 3.0 Creative Commons License, and the text content is free to modify, republish, and share. We evaluate inference on 1000 tasks randomly selected from the dataset. The admissible action space and interaction object space are more complex than in the RobotHow programs, and there is no fixed "[Action] <Object> (Number)" form for each step. Each article contains a title, bold headlines, and text; we utilize the title and headlines as our task name and steps, respectively.
External Knowledge Base For the external knowledge base, we utilize ConceptNet to leverage commonsense reasoning ability and help ground goal-guided procedural text generation. ConceptNet (Speer et al., 2017) captures commonsense knowledge explicitly with triplets of (head node, relation, end node). It contains 799,273 nodes and 2,487,810 edges representing both symmetric and asymmetric relations. Specifically, the core relations we utilize are Synonym, AtLocation, CapableOf, Causes, CausesDesire, HasPrerequisite, HasSubevent, and UsedFor. Since we target commonsense knowledge for household tasks, we filter out the relations (/r/DistinctFrom, /r/DerivedFrom, /r/SymbolOf, /r/EtymologicallyRelatedTo, /r/EtymologicallyDerivedFrom) that are linguistic in nature.
2https://www.wikihow.com
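For reference, task-relevant edges can be pulled from ConceptNet's public REST API as sketched below; the JSON field names follow the public API docs but should be treated as assumptions, and a local dump is preferable for bulk extraction.

```python
import requests

KEEP = {"Synonym", "AtLocation", "CapableOf", "Causes",
        "CausesDesire", "HasPrerequisite", "HasSubevent", "UsedFor"}

def household_edges(concept: str, limit: int = 50) -> list:
    """Query http://api.conceptnet.io and keep only household-domain relations."""
    url = "http://api.conceptnet.io/c/en/" + concept.replace(" ", "_")
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    return [(e["start"]["label"], e["rel"]["label"], e["end"]["label"], e["weight"])
            for e in edges if e["rel"]["label"] in KEEP]

print(household_edges("television")[:5])
```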
B.2 COUNTERFACTUAL DATASET AND EXPERIMENT DETAILS
Table 6 shows examples comparing the original program and the counterfactual program for each intervention method. Specifically, for Initial Configuration, we randomly append a location to a given task name to constrain where the task is completed, and the steps are prepended with the initial step "walk to <Location>". For Intermediate Step, we randomly sample a step from the task-specific program and append it to the task name to constrain the way a given task is implemented. For Final Goal, we randomly combine two tasks, merging both the task names and the programs, to construct a set of long-horizon composite tasks.
We conduct counterfactual experiments by applying randomly selected intervention methods on RobotHow, and we apply only the Intermediate Step intervention on WikiHow due to its loose configuration requirements and long text content. Note that the performance gain of PLAN under the counterfactual setting mainly comes from the additional task guidance introduced by the Intermediate Step intervention. The baselines, in contrast, mostly experience performance drops due to their limited annotated exemplars. PLAN consistently outperforms the baselines by a large margin, indicating its superiority under the counterfactual setting.
B.3 METHOD DETAILS
The existing formalization of the procedural planning task can be mainly categorized as 1) sequential choice making (Lyu et al., 2021; Wu et al., 2022; Zhang et al., 2020a;b), which reasons about the next step from the options given, the task, and previous steps; 2) conditioned generation (Huang et al., 2022; Ahn et al., 2022), which generates the temporally extended plans to implement the task. We study the procedural planning task as the conditioned generation problem (Huang et al., 2022; Ahn et al., 2022) since it resembles real-world scenarios.
Baselines LLMaP proposes a procedure to extract temporally extended plans from large pre-trained language models. Chain explores manually created exemplars that mimic the reasoning process and uses them to prompt large language models on reasoning tasks. To compare with Chain on the procedural planning task, we manually generate exemplars containing the chain of thought for 1% of the inference task programs. For the BART language model, we use the BART-large version, and we use the 1.5-billion-parameter GPT-2 (gpt2-xl). For the translation model LMT, we use a sentence-transformer (RoBERTa-large). All these models are released by HuggingFace. In addition, our experiments with GPT-3 (davinci) use the OpenAI API (May 2022).
External Knowledge Graph ConceptNet 5 defines a set of 34 relations.3 For the relations we consider in the procedural planning task, the average subgraph sampling time is 0.03576 milliseconds per task program.
B.4 HYPERPARAMETER SEARCH AND CONFIGURATION DECISION
We perform a hyperparameter search for all evaluated methods over the following hyperparameters.
• The confidence threshold θ, which terminate the generation when below it, is searched in {0, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8}.
• The steps horizon, which constrains the maximal number of procedural planning steps, is searched in {10, 20, 40}.
• The number of hops for retrieving the subgraph from the external knowledge base is searched in {1, 2, 3}.
• The ratio of maximal concepts to the length of the task name is searched in {1, 2, 3}.
• The cosine similarity threshold for keeping a task-specific concept is searched in {0.4, 0.6, 0.8}.
• The edge weight threshold θe is searched in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}.
• The top-k task-specific node count is searched in {1, 5, 10, 15, 20, 25, 50, 100}.
The configurations used in the experiments are: θ=0.7, 20 step horizon, 3 hops, 3 ratio of concepts to task length, cosine similarity threshold 0.4, θe=0.6 and k=10.
We empirically choose the hop number H as 3 considering both the input length limit of the LLMs and the fact that 3-hop contains reasonable relevant information in practice (Zhang et al., 2022).
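Collected in one place, the chosen configuration looks as follows (values copied from this section; the key names are our own):

```python
PLAN_CONFIG = {
    "confidence_threshold": 0.7,   # θ: terminate generation below this score
    "max_steps": 20,               # step horizon
    "num_hops": 3,                 # H-hop subgraph retrieval
    "concept_ratio": 3,            # max concepts / task-name length
    "concept_similarity": 0.4,     # cosine threshold for task-specific concepts
    "edge_weight_threshold": 0.6,  # θ_e
    "top_k_nodes": 10,             # top-k task-specific nodes
}
```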
B.5 COMPUTATION AND RESOURCES
We use a single NVIDIA A100 GPU server for all the experiments. Since there is no training in our zero-shot setting, computation is used only for the inference stage of the experiments.
C EVALUATION DETAILS
C.1 CROWDSOURCING HUMAN EVALUATION
We conduct all the human evaluations (rating and win-lose comparison) on the Amazon Mechanical Turk platform. Each example is rated by 3 annotators. For every assignment, we ask Amazon Mechanical Turk workers to evaluate the quality of the provided low-level steps given the high-level task description. For the Win-Lose Comparison, they were asked to choose one of the two provided model-generated results: 1 = the first one is better, 2 = equal, 3 = the second one is better. For the Human Ratings, they were asked to score each sample on a 5-point Likert scale. This process does not involve collecting any personal information, and we manually check that no offensive content is produced by the models.
The assignment layout templates for workers are shown in Figure 7 and Figure 6. Specifically, we evaluate 50 randomly selected task examples from each dataset (RobotHow and WikiHow) under all settings (standard and counterfactual). We only keep examples for which the workers read the instructions carefully, checked by whether they give a score of 1 to the empty program as a sanity check. The hourly wage paid to participants is estimated at $9, and the total amount spent on participant compensation is $1,296. The details of the Human Intelligence Tasks process are described in the following sections.

3 https://github.com/commonsense/conceptnet5/wiki/Relations
C.1.1 WIN-LOSE COMPARISON
During the Human Intelligence Tasks, the workers are shown the following instructions: Read the given task and the sequence of steps, and determine which set of steps can better complete the target task. In other words, can the task be decomposed into these steps? Please consider the sequential order of the steps.
Then the program to be evaluated is provided as:
Question Task: Study
Sequence 1: Step 1: Walk to textbook Step 2: Read book Step 3: Walk to book
Sequence 2: Step 1: Walk to home office Step 2: Find desk
Finally, the workers are asked to score the program by following the instructions below: Select an option: 1 - Sequence 1 is better; 2 - Tie; 3 - Sequence 2 is better
The above example evaluates the order metric; for the coverage metric, the same process is conducted, except that the instructions are: Read the given task and the sequence of steps, and determine which sequence covers more steps that are necessary to complete the target task. Please ignore the sequential order of the steps.
C.1.2 HUMAN RATINGS
Similar to the Win-Lose Comparison Human Intelligence Tasks, the workers are shown the following instructions: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (please consider the sequential order of the steps). You may directly give the lowest score (1) for empty steps. In other words, can the task be decomposed into these steps? (Please consider the sequential order of the steps.)
Then the program to be evaluated is provided as:
Question Task: Write an email
Sequence of Steps: Step 1: Walk to home office Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Walk to computer Step 7: Find chair Step 8: Sit on chair Step 9: Find keyboard Step 10: Grab keyboard Step 11: Find mouse Step 12: Grab mouse Step 13: Type on keyboard
Finally, the workers are asked to score the program by following the instructions below: Use the slider below to indicate how much you agree with the following statement (1 = Strongly disagree, 5 = Strongly agree). If the "sequence of steps" is blank, please directly choose 1 (lowest score). The task can be completed in any reasonable scenario using the provided steps. [SLIDER PROVIDED HERE]
The above example evaluates the order metric; for the coverage metric, the same process is conducted, except that the instructions are: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (please ignore the sequential order of the steps). You may directly give the lowest score (1) for empty steps. In other words, can the task be decomposed into these steps? (Please ignore the sequential order of the steps.)
C.2 MORE RESULTS
Significance Test We provide paired t-test (p < 0.05) statistics for Table 2. On RobotHow, PLAN significantly outperforms all baselines on Original-Order (BART) and Counterfactual-Coverage (GPT-2). On WikiHow, PLAN significantly outperforms all baselines on Original-Coverage (BART, GPT-2), Counterfactual-Coverage (BART, GPT-2), and Counterfactual-Order (BART). For the coverage metric under the counterfactual setting, the human-provided program is not significantly better than PLAN.
We also conduct paired t-tests (p < 0.05) over the variants "w/o Adaption" and "w/o Symbolic". Compared with the full PLAN model, the variants experience statistically significant performance drops; on BERTScore-F1 in particular, the p-values are 8.884e−13 and 1.4e−8, respectively. This further confirms the importance of the two modules.
Results on GPT-3 In addition, we conduct experiments with GPT-3 (davinci) using the OpenAI API. We showcase the comparisons in Table 9 and Table 10.
Motivation of Evaluation Metrics The procedural planning task is open-domain in nature, in that the golden plans may not be unique. This makes common automatic metrics proposed for natural language tasks imperfect for evaluating procedural planning; the same challenge of directly judging systems with automatic metrics is discussed in LLMaP (Huang et al., 2022) as well. We assume that the human evaluations of Coverage and Order reflect how close the procedural plans are to the human-annotated programs, because the human annotators are explicitly required to determine whether the task can be completed in any reasonable scenario using the procedural plans. Thus we provide both automatic and human evaluations of the two aspects, Coverage and Order, described in the Metrics paragraph of Section 4.1.
Evaluation on Success Rate Metric To make the human evaluations more intuitive, we provide an additional Success Rate metric indicating whether the procedural plans successfully implement the task, focusing on success rather than the coverage or order of the plans. We show the Success Rate evaluations of the baselines and our method in Table 11. The assignment layout template for workers is shown in Figure 8.
More Ablation To verify the contribution of the translation language model LMT that translates the knowledge prompt PG into the admissible prompt P̂G, we conduct an additional ablation experiment that simply removes this first LMT and replaces P̂G with PG to prompt the LLM for procedural planning. We provide the results, with comparisons to the other ablations, in Table 12.
Results on Counterfactual Task Samples We show automatic evaluation results on counterfactual RobotHow in Table 13.
D QUALITATIVE EXAMPLES
D.1 INTERMEDIATE OUTPUT
We provide running examples with intermediate outputs for each module in the following paragraphs. We show the input task T, the subgraph Gs depicted as tuples of (head node, relation type, tail node, edge weight), the knowledge prompt PG, and the translated prompt P̂G as below:
• Input task T : Take shower.
• Human-annotated Plan Reference: Step 1: Walk to bathroom. Step 2: Walk to clothes dress. Step 3: Find clothes dress. Step 4: Put off clothes dress. Step 5: Find shower. Step 6: Enter shower. Step 7: Find soap. Step 8: Grab soap. Step 9: Scrub soap. Step 10: Put back soap. Step 11: Leave shower. Step 12: Find towel. Step 13: Grab towel. Step 14: Wipe towel. Step 15: Find clothes dress. Step 16: Put on clothes dress.
• Task-relevant subgraph Gs(Nhead, Re, Ntail, Ew): (take a shower, HasLastSubevent, dry off, 6.0); (bathe, HasLastSubevent, dry off, 6.0); (take a shower, HasPrerequisite, take out your clothes, 4.47); (take a shower, HasSubevent, get clean, 4.47); (take a shower, HasPrerequisite, take your clothes off, 3.46); (go to a party, HasPrerequisite, take a shower, 2.82); (play lacrosse, HasLastSubevent, take a shower, 2.82); (get clean, HasPrerequisite, take a shower, 2.82); (take a shower, MotivatedByGoal, wash your hair, 2.82); (play sports, HasLastSubevent, take a shower, 2.82); (go to the hairdresser, HasPrerequisite, take a shower, 2.82); (take a shower, HasPrerequisite, turn on the water, 2.0); (have a bath, HasLastSubevent, dry off, 2.0); (get wet, HasSubevent, dry off, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); (take a shower, HasSubevent, wash your hair, 2.0); (take a shower, HasLastSubevent, turn off the water, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); take a shower, HasLastSubevent, put your clothes on, 1.0); (take a shower, HasSubevent, use shampoo, 1.0); (take a shower, HasSubevent, wash behind your ears, 1.0); (take a shower, HasSubevent, wash your body, 1.0); (take a shower, HasPrerequisite, go to the bathroom, 1.0); (take a shower, HasPrerequisite, go to the bathroom and undress, 1.0); (take a shower, HasPrerequisite, step into the shower, 1.0); (take a shower, HasPrerequisite, soap up, 1.0); (take a shower, HasPrerequisite, bring some soap, 1.0); (take a shower, HasPrerequisite, bring some shampoo, 1.0); (take a shower, HasLastSubevent, towel yourself off, 1.0); (take a shower, HasPrerequisite, remember to take your towel, 1.0);
• Knowledge prompt PG: Step: take out your clothes. Step: set clean clothes. Step: go to the bathroom. Step: go to the bathroom and undress. Step: take your clothes off. Step: turn on the water. Step: bring some soap. Step: bring some shampoo. Step: remember to take your towel. Step: get clean. Step: wash your hair. Step: use shampoo. Step: wash behind your ears. Step: wash your body. Step: turn off the water. Step: dry off. Step: become more clean. Step: put your clothes on.
• Translated knowledge prompt P̂G: Step: find clothes underwear. Step: wash shower. Step: walk to bathroom. Step: look at shower. Step: take off clothes underwear. Step: wash shower. Step: find soap. Step: grab soap. Step: scrub shower. Step: find shampoo. Step: grab shampoo. Step: find towel. Step: wipe shower. Step: wash hair. Step: wipe shampoo. Step: scrub shower. Step: wash body. Step: switch off washing machine. Step: scrub shower. Step: wipe shower. Step: put on clothes underwear.
• Generated plan ST : Step 1: Find clothes underwear. Step 2: Walk to Bathroom. Step 3: Take off clothes shirt. Step 4: Scrub shower. Step 5: Pour shampoo into hair. Step 6: Wipe shampoo. Step 7: Wipe hair. Step 8: Wash body. Step 9: Find Towel. Step 10: Put on clothes underwear.
D.2 PREDICTED PROCEDURAL PLANS
More randomly selected qualitative examples of the final predicted procedural plans are provided below. Table 14 shows random samples on the original datasets. Table 15 shows random samples on the counterfactual datasets with the Intermediate Step intervention method, and Table 16 shows random samples on counterfactual RobotHow with the Initial Configuration and Final Goal intervention methods.
Model Program (RobotHow, Task: Play Games)
Human: Step 1: Walk to bedroom. Step 2: Walk to video game console. Step 3: Find video game console. Step 4: Walk to video game console. Step 5: Switch on video game console. Step 6: Find television. Step 7: Switch on television. Step 8: Find video game controller. Step 9: Grab video game controller. Step 10: Find chair. Step 11: Walk to chair. Step 12: Sit on chair.
Chain: Step 1: Put chef knife on water glass. Step 2: Find sink.
LLMaP: Step 1: Walk to bedroom. Step 2: Walk to video game console. Step 3: Find video game console.

Model Program (RobotHow Initial Configuration, Task: Play games in dining room)
Human: Step 1: Walk to dining room. Step 2: Walk to video game console. Step 3: Find video game console. Step 4: Walk to video game console. Step 5: Switch on video game console. Step 6: Find television. Step 7: Switch on television. Step 8: Find video game controller. Step 9: Grab video game controller. Step 10: Find chair. Step 11: Walk to chair. Step 12: Sit on chair.
Task: Play games in dining room. Step 1: Walk to video game controller. Step 2: Put video game controller on diningtable. Step 3: Put boardgame on kitchen table. Step 4: Put boardgame on diningtable.
Chain: Step 1: Walk to dining room. Step 2: Walk to sauce pan. Step 3: Pour bottle water into dish bowl. Step 4: Walk to water. Step 5: Walk to carrot. Step 6: Walk to food salt.
E DISCUSSION
E.1 LIMITATIONS
Although we point out a direction for prompting actionable knowledge out of large-scale pre-trained language models with external commonsense knowledge, limitations in reasoning over long-horizon procedural plans remain. Existing datasets for procedural planning, such as WikiHow and RobotHow, are monolingual, supporting only English goals and plans. In the future, it is important to expand these datasets, or to build novel datasets, that support the multiple languages used across the world. The inherent differences between these languages may also result in different planning strategies at different granularities or abstraction levels, which is potentially challenging. In addition, long-horizon and complex composite tasks remain challenging for existing procedural planners.
The above limitations are discussed mainly in terms of the challenges of the procedural planning task. In addition, there are limitations of our implementation, which is guided by our causal analysis. First, the coverage of the leveraged external resources is limited, as is common in knowledge-enhanced systems. This may result in a wrong understanding of the task and produce unreasonable procedural plans. For example, the word "Turking", which refers to "the act or process of performing small tasks using the Amazon Mechanical Turk service" according to Wiktionary, is not covered by the external resources (e.g., ConceptNet). Since our proposed system does not assume specific external resources, it is plausible to utilize more powerful external resources (e.g., Wiktionary) in the future. Second, the hop number and the threshold of the multi-hop retrieval in task-relevant subgraph sampling are currently configured hyperparameters, which may result in sub-optimally constructed prompts. Future work could make these hyperparameters learnable for each task domain, and also explore the pros and cons of end-to-end commonsense-infused prompts versus neuro-symbolically constructed prompts.
E.2 FAILURE ANALYSIS
We discuss detailed failure modes and examples with analyses below. Consider the predicted procedural plans for the task "Turking", which refers to "the act or process of performing small tasks using the Amazon Mechanical Turk service" according to Wiktionary. We compare the predicted procedural plans for this task among the baselines and our method: (1) the ground-truth plan is "Task: Turking. Step 1: Walk to home office. Step 2: Walk to desk. Step 3: Find chair. Step 4: Sit on chair. Step 5: Find computer. Step 6: Switch on computer"; (2) the plan predicted by the Chain baseline is empty; (3) the plan predicted by the LLMaP baseline is "Task: Turking. Step 1: Put teddybear on oven."; (4) our prediction is "Task: Turking. Step 1: Eat food turkey. Step 2: Drink water. Step 3: Sleep." We can see that for such "out-of-knowledge" tasks, our method also produces failed plans. We attribute this mainly to the limited knowledge in the external resources, as discussed in Appendix E.1; this main failure mode can be avoided by introducing larger external resources (e.g., Wiktionary), as in other knowledge-enriched methods.
E.3 ETHICAL CONSIDERATIONS
We hope to de-bias procedural planning to avoid misleading either humans or robots with daily-life instructions, which may otherwise result in unsafe situations. The cultural bias behind these datasets can be a critical issue for future work. As the ground-truth planning steps usually reflect the culture shared by the English-speaking population, other cultures may have completely different practical considerations that lead to different orderings of these steps, or even novel steps not proposed by the LLMs we utilize in this paper. In the future, we will consider cultural bias as a proxy variable so that we can adjust the implicit knowledge from the LLM, or the commonsense from external sources, according to the needs of different cultural backgrounds.

1. What is the main contribution of the paper regarding text generation using large language models?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its principled motivation, technical solutions, and ablation study?
3. Do you have any concerns or questions regarding the paper's causal analysis, specifically with regards to the updated causal models, P_i-1 being copied into P_i, and the simplification of the graph?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially in relation to the tool used for causal analysis?
5. Are there any limitations or potential breakpoints in the method that should be discussed in more detail?

Summary Of The Paper
The paper presents a principled algorithm for incorporating symbolic information into text generation with an LLM. The idea is justified with a causal analysis, so the algorithm is motivated by the front-door criterion. The external information is extracted via entity extraction, pulling related information from ConceptNet; the results are concatenated to the prompt and interpreted as conditioning. The human evaluation and metrics indicate the method leads to improvements across three different LLMs.
Strengths And Weaknesses
Strengths
Principled motivation using causality
Concrete technical solutions to the challenges of integrating the selected source of information
Ablation study shows the impact is not trivial.
Idea is novel as far as I know.
For instance, this survey appeared after the submission. It does not mention work as specific as this submission: Feder, Amir, Katherine A. Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, et al. “Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond.” Transactions of the Association for Computational Linguistics 10 (October 2022): 1138–58. https://doi.org/10.1162/tacl_a_00511.
Weaknesses
It's not clear which is the current causal model used at each point. (See question below)
The paper might be assuming P_i is the only path, but the existence of other paths that need to be blocked is not discussed.
The argument loses some force with the adaptation step and the other heuristic decisions about what to retrieve.
Clarity, Quality, Novelty And Reproducibility
My main concern is that the causal analysis is both very thorough and also confusing. After trying the causal models in the tool https://causalfusion.net/app, I obtain different estimands. The reason seems to be that the model in Fig. 2 is updated after each iteration, fixing the values of the previous variables, which makes it hard to understand some of the statements. For instance, Eqs. (1) and (10) say P(P_i = p | do(S_{i-1})) = P(P_i = p | S_{i-1}). However, using that tool I obtain something like
P(P_3 | do(S_2)) = ∑_{P_2, T} P(P_3 | S_2, P_2, T) P(P_2, T)
P(P_2 | do(S_1)) = ∑_{P_1, T} P(P_2 | S_1, P_1, T) P(P_1, T)
but if I fix T and P_{i-1}, which is what would happen in greedy decoding with an LLM, then I should obtain P(P_i = p | S_{i-1}).
Something similar happens with Eq (9)
However, this makes the analysis hard to follow, as the new causal models are not referred to in the equations. There is also a comment about P_{i-1} being copied into P_i that makes things more complicated.
Question:
Am I right about these concerns?
Let's keep in mind that the algorithm is just an implementation. In principle, what we want is the prediction of all the do() compounded.
Perhaps the appendix is the place to clarify this point, explaining:
what's exactly the new causal graph after each iteration.
why does it make sense to simplify the graph.
Clarify which causal graph is related to each equation.
More questions:
Does the causal analysis hold given the "adaption" that was necessary? I'd like to see a clearer causal critique of that situation.
Does the use of the front-door criterion hold given that there could exist other words related to the task?
Where would this method break? The results are uniformly positive, but there is no detailed discussion of limitations.
I think the work is interesting and solid. My only concern is the one I just mentioned on the clarity of the causal analysis. |
Title
Neuro-Symbolic Procedural Planning with Commonsense Prompting
Abstract
Procedural planning aims to implement complex high-level goals by decomposition into simpler low-level steps. Although procedural planning is a basic skill set for humans in daily life, it remains a challenge for large language models (LLMs) that lack a deep understanding of the cause-effect relations in procedures. Previous methods require manual exemplars to acquire procedural knowledge from LLMs in the zero-shot setting. However, such elicited pre-trained knowledge in LLMs induces spurious correlations between goals and steps, impairing the model’s generalization to unseen tasks. In contrast, this paper proposes a neuro-symbolic procedural PLANner (PLAN) that elicits procedural knowledge from the LLMs with commonsense-infused prompting. To mitigate spurious goal-step correlations, we use symbolic program executors on the latent procedural representations to formalize prompts from external knowledge bases as a causal intervention toward the Structural Causal Model of procedural planning. Both automatic and human evaluations on WikiHow and RobotHow show the superiority of PLAN on procedural planning without further training or manual exemplars.1
1 INTRODUCTION
How to make a cup of coffee? As humans, we can easily specify a procedure to solve this task using our innate ability of commonsense reasoning. However, can we endow machines with the same ability to construct a sequential plan? As depicted in Figure 1, procedural planning (Pearson, 1996; Zhang et al., 2020b; Huang et al., 2022) aims to decompose a high-level goal (Task: Watch TV) into a sequence of temporally extended steps (Procedural Plan: the steps at all five time-steps).
We study procedural planning as a conditional text generation problem since it resembles real-world scenarios. Previous approaches (Huang et al., 2022; Ahn et al., 2022) require a small number of carefully written or held-out exemplars to acquire procedural knowledge. However, such manual exemplars, evolved from task data, cannot cover the ever-changing task setups and the flexible dependency relations among goals and steps. In fact, the biased data may cause the model to learn spurious correlations and hinder it from generalizing well in zero-shot scenarios. Studies in cognitive science show that humans rely on chunking mechanisms (Gobet et al., 2001; Miller, 1956), which turn primitive stimuli into conceptual groups to solve novel and complex problems. Inspired by this, we hypothesize that generalizable procedural planning ability can be achieved by learning cause-effect relations among complex goals and simpler steps using external knowledge.
To reveal the cause-effect relations in procedural planning, we devise a Structural Causal Model (SCM) (Peters et al., 2017), a directed acyclic graph commonly used to describe the causal relationships within a system (Pearl, 2009). As depicted in Figure 2, the pre-trained knowledge (D) (e.g., TV and living room are highly correlated) in LLMs confounds the system (D influences T, Si−1 and Si, resulting in spurious correlations), leading to biased decisions toward an unreasonable step (e.g., Find Television). Thus, we adopt front-door adjustment (definition in Appendix A.3), which utilizes a mediator (Pi) that blocks all directed paths from the cause (T or Si−1) to the effect (Si). In this way, T (or Si−1) affects Si by flowing through indirect paths: T (or Si−1) affects Pi, and Pi affects Si. We can then identify the causal effects among goals and steps by investigating the indirect effect (Equation 3), which is computed by multiplying the effect of T (or Si−1) on Pi (Equation 1) with the effect of Pi on Si (Equation 2). With the above front-door adjustment, we can mitigate the spurious correlations (e.g., between "television" and "living room") and thus make reasonable decisions on steps (e.g., Find book). Please refer to A.1 for causal preliminaries (including explanations of SCM, confounder, mediator, and spurious correlations), and A.3 for the front-door adjustment definition.

1Source code and datasets are publicly available at https://sites.google.com/view/iclr-clap
Guided by the above causal analysis of procedural planning, we need to construct the mediator Pi and then intervene on task T and prompt Pi, which is required to compute the conditional probability in Equation 3. As depicted in Figure 3, we seek to automatically construct commonsense-infused prompts as the mediator Pi by concatenating the task and previous steps with commonsense knowledge extracted from external resources (e.g., ConceptNet (Speer et al., 2017)). First, we modify the goal input by sampling a task-relevant knowledge subgraph (Stage1 in Section 3.1) to implement interventions on T. Then, we modify the prompt by adapting the edge weights to implement interventions on Pi (Edge-Wise Adaption of Stage2 in Section 3.1). However, directly incorporating knowledge of graph structure into LLMs leads to the loss of logical order in eliciting procedural knowledge from LLMs. Thus, we apply symbolic executors (Mao et al., 2019; Yi et al., 2018) that execute the sequential mapping program on latent knowledge representations (e.g., the subevent of). In this way, we translate graph-structured knowledge into natural language that preserves procedural structure, such as the sequential order of two low-level steps (Symbolic Structuring of Stage2 in Section 3.1). The procedural prompt PG (e.g., "please get the remote control") is further translated into an admissible one P̂G (e.g., "grab remote control") from available steps in a certain domain (RobotHow or WikiHow in our case). Finally, we utilize the commonsense-infused prompt P̂G to control the generation of procedural plans in LLMs in a zero-shot setting (Section 3.2).
We conducted experiments on RobotHow (Puig et al., 2018) and WikiHow (Koupaee & Wang, 2018) under original and counterfactual situations. Our major contributions can be summarized as:
• We develop the first causal framework for procedural planning by 1) defining a temporally extended Structural Causal Model and 2) resolving spurious correlations between high-level goals and low-level steps via front-door adjustment with a prompt-based mediator.
• We propose a neuro-symbolic approach to construct commonsense-infused prompts for LLMs to tackle the procedural planning task without manual exemplars or further training.
• Extensive evaluations show the superiority of PLAN in terms of reasoning about the cause-effect relations among goals and steps and achieving promising planning ability.
2 EXTERNAL KNOWLEDGE MATTERS IN PROCEDURAL PLANNING
As depicted in Figure 1, procedural planning requires generating the Plan (e.g., Step 1: Walk to the living room.) conditioned on the Task (e.g., Watch TV). We first describe the problem definition
and then show why external knowledge matters in procedural planning through the lens of causality. Finally, we show how we elicit procedural ability from the Large Language Models (LLMs).
2.1 PROBLEM DEFINITION
Given a high-level task T (e.g., watch television in the living room) sampled from a task domain MT (e.g., RobotHow), a procedural planner aims to decompose it into lower-level temporally extended steps ST = {S1, ..., Si | Si ∈ S̄}. There exists a fixed set of admissible steps S̄, constrained by the task domain MT (e.g., the affordance of the interacted objects). The step Si at timestep i is generated as π(Si | T, S0:i−1).
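To make this formulation concrete, the following is a minimal sketch of the planner interface implied by the definition; the function names and types are ours, not part of the released code.

```python
from typing import Callable, List, Optional

# A procedural planner models pi(S_i | T, S_{0:i-1}): given the task name and
# the step history, it returns the next admissible step, or None to terminate.
Planner = Callable[[str, List[str]], Optional[str]]

def rollout(planner: Planner, task: str, max_steps: int = 20) -> List[str]:
    """Decompose a high-level task T into a plan S_T = {S_1, ..., S_i}."""
    history: List[str] = []
    for _ in range(max_steps):
        step = planner(task, history)
        if step is None:
            break
        history.append(step)
    return history
```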
2.2 A CAUSAL LOOK AT PROCEDURE PLANNING WITH LLMS
We seek to empower the LLMs with the ability to reason cause-effect relations in procedural planning. Thus, we devise a causal framework by first defining a Structural Causal Model (SCM) of procedural planning in Figure 2. The SCM describes the temporal dynamics and procedural cause-effect relationship. Our causal assumption in SCM indicates that there is a backdoor path from task to step, which must be blocked with front-door adjustment. Therefore, we model the input prompt as a mediator which is created from external knowledge. More specifically, we define our Full Temporal Causal Graph as in Figure 2a, which is an unrolled Structural Causal Model (SCM) for sequential decision-making. Our goal is to identify the causal relations between the attended task T and plan procedures ST = {S1, S2, . . .} from LLMs. Initially, there are direct paths T → Si and Sk → Si, k < i because Si relies on the LLM attended task entities and previous accomplished steps. D is an unobserved confounder from learned knowledge during pre-training. D builds a backdoor path between T and Si and misguides the LLMs to attend to false entities to generate the next step (see Fig. 2b). Note that D is unobservable as we directly adopt the LLM without knowing the pre-training data. To mitigate the spurious correlation, we then introduce a mediator Pi for each Si as shown in Figure 2a. To achieve our front-door adjustment, we inject external knowledge into LLMs with a neuro-symbolic approach by adopting three stages described in Section 3.1.
3 OUR APPROACH
Although LLMs have strong general language intelligence, they still perform poorly in reasoning the cause-effect relations in procedural plans due to a lack of daily life experience. We propose to elicit the unbiased procedural planning knowledge from the LLMs using the created commonsense-infused Prompt P as π(Si|T, S0:i−1, P ). Figure 3 and Algorithm 1 depict how PLAN tackles the procedural
planning in a five-stage manner. We illustrate the commonsense-infused prompt construction (the first three stages) in Section 3.1 and planning with LLMs (the last stage) in Section 3.2.
3.1 COMMONSENSE-INFUSED PROMPT CONSTRUCTION
Overview Inspired by the causal analysis in Section 2.2, we propose to construct a commonsense-infused Prompt P that helps reveal the cause-effect relations among goals and steps during procedural planning in 3 stages: 1) Stage1 samples a subgraph Gs from the external knowledge base G by extracting task (T)-relevant nodes. 2) Stage2 adapts the edge weights Ew in Gs and applies symbolic structuring to get the admissible knowledge prompt P̂G. 3) Stage3 acquires the temporal order by temporally aggregating the prompt Pi with previous steps S0:i−1.
Stage1:Task-Relevant Knowledge Subgraph Sampling First, we investigate the causal effect T → Pi and Si−1 → Pi (Figure 2). Si is a collider that blocks the association between D and Pi in the path T ← D → Si ← Pi. Let πi denote π(·|Pi−1) that represent the probability density function conditioned on Pi−1. Since there is no backdoor path for T → Pi and similarly for Si−1 → Pi, we simply have the conditional probability after applying do-operators:
$$\pi_i(P_i = p \mid do(T)) = \pi_i(P_i = p \mid T), \qquad \pi_i(P_i = p \mid do(S_{i-1})) = \pi_i(P_i = p \mid S_{i-1}) \tag{1}$$
We achieve the do-operation in a prompting way by modifying the goal input so that the model attends to the task-relevant entities. To implement this, we use NLTK to tokenize and POS-tag the task text T. We then use the nouns (e.g., television), noun phrases (e.g., remote control), and verb phrases (e.g., watch television) as concept nodes. In this way, the task name T is semantically parsed into the Concept Set TE. Each concept e ∈ TE is used as a query for sampling the H-hop task-relevant subgraph Gs ⊆ Ne × Rs × Ne from the external knowledge base G ⊆ N × R × N, where N and R represent the number of concept nodes and commonsense relations, respectively. When extracting Gs, we keep the triplets with relation types in the household domain (e.g., AtLocation, UsedFor) and filter out those in the linguistic domain (e.g., DistinctFrom, DerivedFrom) for the procedural planning task. Ne is maintained as a set of top-k task-relevant nodes using the weight of each Re, which is updated with edge-wise adaption in Stage2.
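A minimal sketch of Stage1 is shown below, assuming the public ConceptNet REST API and single-word concepts; the helper names are ours, and a full implementation would also chunk noun phrases and verb phrases as in the text.

```python
import nltk       # assumes punkt and averaged_perceptron_tagger data are downloaded
import requests

# Relation whitelist from Appendix B.1 (household domain); linguistic
# relations such as DistinctFrom or DerivedFrom are filtered out.
HOUSEHOLD_RELATIONS = {"Synonym", "AtLocation", "CapableOf", "Causes",
                       "CausesDesire", "HasPrerequisite", "HasSubevent", "UsedFor"}

def extract_concepts(task):
    """Semantically parse the task name into a concept set T_E."""
    tagged = nltk.pos_tag(nltk.word_tokenize(task.lower()))
    # Keep nouns and verbs as single-word concepts for brevity.
    return [word for word, tag in tagged if tag.startswith(("NN", "VB"))]

def query_conceptnet(concept, limit=50):
    """Fetch (head, relation, tail, weight) triplets for one concept node."""
    url = "http://api.conceptnet.io/c/en/" + concept.replace(" ", "_")
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    return [(e["start"]["label"], e["rel"]["label"], e["end"]["label"], e["weight"])
            for e in edges if e["rel"]["label"] in HOUSEHOLD_RELATIONS]

def sample_subgraph(task, hops=3):
    """H-hop task-relevant subgraph G_s, expanded hop by hop from T_E."""
    frontier, subgraph, visited = extract_concepts(task), [], set()
    for _ in range(hops):
        next_frontier = []
        for concept in frontier:
            if concept in visited:
                continue
            visited.add(concept)
            triplets = query_conceptnet(concept)
            subgraph.extend(triplets)
            next_frontier.extend(tail for _, _, tail, _ in triplets)
        frontier = next_frontier
    return subgraph
```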
Stage2:Edge-Wise Adaption and Symbolic Structuring Second, we need to find the causal effect for Pi → Si. Since the path Pi ← T ← D → Si contains a backdoor from Pi to Si, we cannot rely on the conditional probability. Instead, we intervene on Pi using the do-operator to cut off D → T:
$$\begin{aligned} \pi_i(S_i \mid do(P_i = p)) &= \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\, \pi_i(T = t, S_{i-1} = s) \\ &= \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\, \pi_i(S_{i-1} = s \mid T = t)\, \pi_i(T = t) \end{aligned} \tag{2}$$
The retrieved concept-centered graph has multiple edges representing various relationships with other actions/entities. Therefore, the summation over the intervened T can be achieved by incorporating these edges into the prompt. For instance, "living room" can be "walked to" and "used for reading", while "book" can be located in "living room" and "bedroom". Similarly, we extrapolate over the edges for i − 1 hops to aggregate the intervened Si, i.e., P(Si−1 = s | T = t). Directly ranking the retrieved nodes Ne with the annotated weight (Ew) in the external knowledge base would result in spurious correlations, because such retrieved local subgraphs tend to capture task-invariant concept nodes as the causal factors. To mitigate this, we propose to adapt the weight of each triplet (Edge-wise Adaption). The adapted weight is the sum of the original edge weight and the cosine similarity between the tail node embedding nEtail of the edge Re and the task embedding vtask: Êw ← Ew + cosine(nEtail, vtask). The embeddings are projected from the node text and task name using the sentence-transformer (Reimers & Gurevych, 2019). The nodes Ne are finally retrieved by ranking the adapted weight Êw.

To better track the utilized external knowledge during inference, we construct the task-dependent commonsense prompt with a Symbolic Executor (Symbolic Structuring), guided by the relation type of each triplet in Gs whose adapted edge weight is beyond a threshold θe. Specifically, the Symbolic Executor acquires the neural information of each natural language node and executes the sequential mapping program by sampling the operation Op from the Symbolic Rule Set R according to the edge relation type. The Symbolic Rule Set R is obtained by mapping the descriptions of the relations in the external knowledge graph (e.g., ConceptNet describes AtLocation as 'A is a typical location for B, or A is the inherent location of B. Some instances of this would be considered meronyms in WordNet.') to symbolic operations (e.g., Op AtLocation). For instance, the AtLocation edge samples the operation Op AtLocation from R, which takes the commonsense relation of the triplet from Gs as the parameters to query the procedural concept output given the natural language meaning of the linked nodes (e.g., go to the location of Start Node Of(re) in this case). Similarly, Op UsedFor may refer to "go to find End Node Of(re) and use it for Start Node Of(re)", and the operators Op HasSubevent and Op HasPrerequisite recursively navigate the subgraph Gs. After navigating the subgraph, we linearize the transformed triplets as the Procedural Prompt PG, which is then translated to the Admissible Knowledge Prompt P̂G by the Translation Language Model LMT.
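The edge-wise adaption and symbolic structuring can be sketched as follows; the encoder choice is illustrative (the paper uses a RoBERTa-large sentence-transformer), and only the two rule templates quoted above are grounded in the text, the others being our guesses.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

def adapt_edge_weights(subgraph, task, theta_e=0.6, top_k=10):
    """Edge-wise adaption: Ê_w <- E_w + cosine(n_tail, v_task)."""
    v_task = encoder.encode(task, convert_to_tensor=True)
    adapted = []
    for head, rel, tail, weight in subgraph:
        n_tail = encoder.encode(tail, convert_to_tensor=True)
        w_hat = weight + util.cos_sim(n_tail, v_task).item()
        if w_hat >= theta_e:       # keep triplets beyond the threshold
            adapted.append((head, rel, tail, w_hat))
    # Re-rank by the adapted weight and keep the top-k triplets.
    return sorted(adapted, key=lambda t: t[3], reverse=True)[:top_k]

# Symbolic rule set R: each relation type maps to a templated operation.
SYMBOLIC_RULES = {
    "AtLocation":      lambda head, tail: f"go to the location of {head}",
    "UsedFor":         lambda head, tail: f"go to find {tail} and use it for {head}",
    "HasPrerequisite": lambda head, tail: f"first, {tail}",   # illustrative
    "HasSubevent":     lambda head, tail: f"{tail}",          # illustrative
}

def build_procedural_prompt(adapted_subgraph):
    """Execute the rule for each triplet and linearize into the prompt P_G."""
    steps = [SYMBOLIC_RULES[rel](head, tail)
             for head, rel, tail, _ in adapted_subgraph if rel in SYMBOLIC_RULES]
    return " ".join(f"Step: {s}." for s in steps)
```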
Stage3:Temporally-Extended Aggregation To acquire temporal order in the procedure, we obtain the Prompt P at timestep i with the aggregation of task T , history steps S0:i−1 and current external knowledge P̂G. The underlying causal mechanism is a combination of Eq. 1 and Eq. 2:
$$\begin{aligned} \pi_i(S_i \mid do(T), do(S_{i-1})) &= \sum_p \pi_i(S_i \mid do(P_i = p))\, \pi_i(p \mid do(T), do(S_{i-1})) \\ &= \sum_p \pi_i(p \mid T) \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\, \pi_i(T = t, S_{i-1} = s) \end{aligned} \tag{3}$$
The adjustment and marginalization in Eq. 3 are achieved in the input space by forming the Procedural Prompt PG, which allows the LLM to attend to the causal entities instead of the highly correlated ones for next-step generation. The LLM can reason over the most relevant edges to link the concepts with the task entities as a context. The prompts from knowledge bases are independent of the pre-training data distribution, so Pi is independent of D and satisfies the front-door criterion. Please refer to Appendix A.3 and Figure 4 for the simplification of our structural causal model.
3.2 PROCEDURAL PLANNING WITH LARGE LANGUAGE MODELS
Stage4:Semantic Generation The external knowledge is further concatenated with the goal input (T) as the initial prompt. Given the prompt, the generation language model LMG ∈ {PAR, PAE} (e.g., GPT3, BART) generates the next sentence, and the most confident prediction is then appended to the previous prompt. The Termination Condition is either reaching the maximum step t or the matching score falling below a threshold θ. The joint probabilities of the auto-regressive (PAR) and auto-encoder (PAE) models are factorized as:
$$\pi_{AR}(x) = \prod_{i=1}^{n} p(s_i \mid \hat{P}_G, s_{1:i-1}, T), \qquad \pi_{AE}(x) = \prod_{i=1}^{n} p(s_i \mid \hat{P}_G, \{s_{1:i-1}, [\mathrm{MASK}]\}, T) \tag{4}$$
where P̂G represents the commonsense knowledge and T represents the task name.
Algorithm 1 Neuro-Symbolic Procedural Planning using Commonsense-Infused Prompting
Require: Task Sample T, Admissible Step Set S, External Knowledge Graph G; Language Model for Generation LMG and Translation LMT; Symbolic Rule Set R
Ensure: Procedural Plan ST = {S1, ..., Si}
1: [Stage1] Semantically parse T into entity set TE;
2: Maintain top-k task-relevant nodes Ne in TE;
3: Retrieve subgraph Gs ⊆ Ne × Rs × Ne from G ⊆ N × R × N for each e ∈ TE;
4: [Stage2] Edge-wise adaption as Êw ← Ew + cosine(nEtail, vtask) and re-rank Ne in TE;
5: Map the description text of the relations Rs in Gs to the Symbolic Rule Set R;
6: Construct procedural prompt PG by verbalizing the re-weighted Gs using R;
7: Translate PG into the Admissible Knowledge Prompt P̂G = LMT(PG);
Temporally-extended zero-shot inference for the Procedural Plan ST = {S1, ..., Si}:
8: for each timestep i do
9:   [Stage3] Aggregate Prompt Pi ← [T; S0:i−1; P̂G];
10:  [Stage4] and [Stage5] Si = LMT(LMG(Pi));
11:  Update Procedural Plan ST ← Si;
12: end for
Stage5:Admissible Step Translation To ensure that the generated procedural plans are grounded in the environment, we should avoid producing steps that are inadmissible (e.g., Toast the table). In other words, the generated steps should be fully constrained to the admissible composites of action and object in a certain task domain. Thus, previous works (Huang et al., 2022; Ahn et al., 2022) have explored using a model (LMT in our case) to score a step selected from a fixed set of available options, instead of directly sampling from the output distribution of the language model (LMG in our case). Specifically, we match the step generated by LMG to the most similar admissible step in the embedding space encoded by the Translation Language Model LMT. Following (Huang et al., 2022), we utilize a Sentence-Transformer (Reimers & Gurevych, 2019) to calculate cosine similarity as π(si|x) = LMT(LMG(x)), which translates LMG(x) into the admissible step si ∈ S̄ that is closest in the embedding space.
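Putting Stages 3-5 together, a minimal sketch of the temporally extended inference loop in Algorithm 1 might look like the following; the model choices and prompt format are illustrative assumptions, and the admissible step set is assumed precomputed for the task domain.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from sentence_transformers import SentenceTransformer, util

lm_g_name = "gpt2-xl"  # generation LM; the paper also uses BART and GPT-3
tokenizer = AutoTokenizer.from_pretrained(lm_g_name)
lm_g = AutoModelForCausalLM.from_pretrained(lm_g_name)
lm_t = SentenceTransformer("all-MiniLM-L6-v2")  # translation LM (illustrative)

def translate_to_admissible(generated, admissible_steps, admissible_emb):
    """Stage5: map a free-form step to the closest admissible step."""
    emb = lm_t.encode(generated, convert_to_tensor=True)
    scores = util.cos_sim(emb, admissible_emb)[0]
    best = int(scores.argmax())
    return admissible_steps[best], float(scores[best])

def plan(task, knowledge_prompt, admissible_steps, max_steps=20, theta=0.7):
    """Stages 3-5: aggregate the prompt, generate, translate, and append."""
    admissible_emb = lm_t.encode(admissible_steps, convert_to_tensor=True)
    history = []
    for _ in range(max_steps):
        # Stage3: temporally extended prompt P_i = [T; S_{0:i-1}; P̂_G]
        prompt = (f"Task: {task}. {knowledge_prompt} "
                  + " ".join(f"Step {j + 1}: {s}." for j, s in enumerate(history)))
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        out = lm_g.generate(ids, max_new_tokens=20, do_sample=False,
                            pad_token_id=tokenizer.eos_token_id)
        generated = tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
        # Stage4/5: translate to an admissible step; terminate below theta
        step, score = translate_to_admissible(generated, admissible_steps, admissible_emb)
        if score < theta:
            break
        history.append(step)
    return history
```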
3.3 COUNTERFACTUAL PROCEDURAL DATA CONSTRUCTION
To investigate counterfactual reasoning ability, we design three families of intervention methods: 1) Initial Configuration: intervene on the initial configuration, such as the location for implementing the task. 2) Intermediate Step: randomly select one step from the ground-truth program as an additional constraint on implementing the task and append it to the task name for generating the procedural plan. 3) Final Goal: intervene on the task goal as the composite of another randomly sampled task. Table 5 in the Appendix summarizes the categories and descriptions. The counterfactual dataset construction details and post-intervention examples are provided in Appendix B.2.
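Under the assumption that programs are lists of step strings, the three intervention families can be sketched as below; the function names are ours.

```python
import random

def intervene_initial_configuration(task, program, location):
    """Initial Configuration: constrain the location of completing the task."""
    return f"{task} in {location}", [f"walk to {location}"] + program

def intervene_intermediate_step(task, program):
    """Intermediate Step: append one randomly sampled ground-truth step to the task."""
    return f"{task}. {random.choice(program)}", program

def intervene_final_goal(task_a, program_a, task_b, program_b):
    """Final Goal: compose two tasks into one long-horizon goal."""
    return f"{task_a} and {task_b}", program_a + program_b
```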
4 EXPERIMENTS
4.1 PROCEDURAL PLANNING SETUP
Datasets We conduct zero-shot experiments without training on two datasets with procedural information: WikiHow (collected following (Koupaee & Wang, 2018)) and RobotHow (Puig et al., 2018). WikiHow is a large-scale text summarization dataset constructed from a human-written knowledge base, involving procedural tasks that span various topics. We utilize the "how to" titles as task names and the summarized headlines as steps. RobotHow is a large knowledge base of common household tasks collected in the VirtualHome (Puig et al., 2018) simulator. The dataset contains programs with high-level task names and low-level steps. MT is composed of 292 and 2000 distinct tasks from RobotHow and WikiHow, respectively. Human evaluations use 50 randomly sampled task examples for each dataset. Automatic evaluations use 150 and 1000 task examples randomly sampled from RobotHow and WikiHow, respectively. Please refer to Appendix B.1 and Appendix B.2 for dataset details.
Baselines We compare our approach with three vanilla generative pre-trained language models (BART, GPT2, and GPT3) and two powerful generation baselines (Zero-shot Planner (Huang et al., 2022) noted as “LLMaP” and Chain of Thought (Wei et al., 2022) noted as “Chain”). More method and configuration details of the models can be found in Appendix B.3 and Appendix B.4.
Metrics We ask human annotators on the Amazon Mechanical Turk platform to rate model performance on two aspects: 1) Coverage: which sequence covers more of the steps that are necessary to complete the target task, ignoring their order (captures semantic completeness). 2) Order: which set of steps better completes the target task when the sequential order of the steps is taken into account (captures sequential order correctness). In addition, we use Sentence-BLEU (S-BLEU) (Papineni et al., 2002), BERTScore (Zhang* et al., 2020), ROUGE-1 (Lin, 2004) and Word Mover's Distance (WMD) (Kusner et al., 2015) as automatic evaluation metrics. These metrics compute semantic scores between the annotated programs and the predictions. Details of the crowdsourced human evaluation can be found in Appendix C.1.
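The automatic metrics can be computed with standard libraries, for example as in the sketch below; the exact tokenization, smoothing, and embedding choices in the released code may differ.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from bert_score import score as bert_score
from rouge_score import rouge_scorer
import gensim.downloader

w2v = gensim.downloader.load("glove-wiki-gigaword-50")  # small vectors for WMD
rouge = rouge_scorer.RougeScorer(["rouge1"])

def automatic_metrics(prediction, reference):
    s_bleu = sentence_bleu([reference.split()], prediction.split(),
                           smoothing_function=SmoothingFunction().method1)
    _, _, f1 = bert_score([prediction], [reference], lang="en")
    r1 = rouge.score(reference, prediction)["rouge1"].fmeasure
    wmd = w2v.wmdistance(prediction.lower().split(), reference.lower().split())
    return {"S-BLEU": s_bleu, "BERTScore-f1": float(f1[0]),
            "ROUGE1": r1, "WMD": wmd}
```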
4.2 HUMAN EVALUATION RESULTS WITH COVERAGE AND ORDER METRIC
Each example is rated by 3 crowdsourcing annotators. For the Win-Lose Comparison, we ask the human raters to choose between ours and the baseline LLMaP (Huang et al., 2022). Averaged results reported in Table 1 show that our PLAN is more frequently rated as better for both the coverage and order metrics, outperforming the baseline in winning ratio by 21% in coverage and 26% in order across the two datasets. We report the average results of Human Ratings on a 5-point Likert scale in Table 2. The consistent performance boost of PLAN indicates the superiority of injecting external commonsense knowledge into the procedural planning task. The performance drop of LLMaP and Chain in the counterfactual setting indicates the vulnerability of fixed holdout knowledge and pre-defined manual exemplars for causal procedural planning. Please refer to Appendix C.1 for crowdsourced human evaluation interface details. Table 3 shows two examples for Qualitative Comparison. More examples can be found in Appendix D.
4.3 AUTOMATICALLY MEASURING THE PROCEDURAL PLANNING
Main Results Table 4 summarizes the automatic evaluation results. PLAN achieves the best results regardless of the language model architecture, either autoregressive- or autoencoder-based. The performance gain of "LLMaP" over "Chain" is likely due to direct exposure to the holdout task from the dataset. The "Chain" baseline still outperforms the vanilla baseline that only takes the high-level task name as the prompt. Note that the annotated program is not the only solution, so these automatic metrics provide limited absolute performance information. Details of the correlation between automatic metrics and human evaluation can be found in Section 4.5.
Effects of Edge-wise Adaption and Symbolic Program Execution The variant "w/o Adaption" maintains the top-k task-specific nodes ranked by the annotated weight Ew in the external knowledge base G without adaption. The variant "w/o Symbolic" directly takes the extracted concept nodes from the external knowledge base as the prompt. The performance drop of these two variants in Table 4, with significance tests in Appendix C.2, demonstrates the importance of the adaption and symbolic modules.
Effects of the Large Language Model Architecture We use GPT2 and GPT3 as the autoregressive architecture and BART (Lewis et al., 2020) as the autoencoder architecture. The autoregressive architecture achieves better results than the autoencoder one. Since the pre-training objective of autoregressive GPT is to predict the next token given the previous input tokens, we assume the performance gain of GPT is due to a smaller gap between the pre-training objective and procedural planning.
Level of Complexity We report results on the test set separated into several buckets according to the number of steps in the procedural planning task. The step number reflects the difficulty of the task. In Table 7 and Table 8 in Appendix C.2, we show that the averaged performance gain of PLAN over the baselines is consistent or more significant in more complicated procedural planning settings. This indicates the superiority of PLAN in solving long-horizon tasks.
4.4 RESULTS ON COUNTERFACTUAL TASK SAMPLES
We apply the Initial Configuration, Intermediate Step, and Final Goal interventions on RobotHow and the Intermediate Step intervention on WikiHow. Human evaluations under the counterfactual setting are summarized in Table 1 and Table 2. PLAN consistently outperforms the baselines by a large margin and experiences a much smaller performance drop compared with the powerful baselines when switching to the counterfactual setting. We attribute this to the biased knowledge of the holdout examples and manual exemplars utilized in the baselines, which are vulnerable to counterfactual samples. Automatic evaluations on counterfactual RobotHow are summarized in Table 13 in Appendix C.2. Aligned with the human evaluations, PLAN achieves the best performance. The overall poor performance in the Final Goal category indicates the challenge of long-horizon and composite procedural planning, while the better performance in the Intermediate Step category benefits from the intermediate guidance.
4.5 CORRELATION BETWEEN AUTOMATIC AND HUMAN EVALUATION
We evaluate segment-level Pearson Correlation between human and automatic metrics. We observe that BERTScore has a moderate correlation to the human coverage score and WMD has a moderate correlation to the human order score, with 23.3% and 32.3% respectively. Similar to the prior findings (Xu et al., 2021), n-gram-based metrics (Sentence-BLEU and ROUGE) have a relatively weaker correlation to the human coverage score, with a Pearson correlation of 16.4% and 21.1%. Overall, our automatic and human evaluation scores are consistent with the main claim of this paper. However, human evaluation is still irreplaceable for procedural planning at the current stage.
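The segment-level correlation itself is a one-liner with SciPy, assuming aligned per-example score lists:

```python
from scipy.stats import pearsonr

def segment_level_correlation(automatic_scores, human_scores):
    """Pearson correlation between aligned per-example score lists."""
    r, p_value = pearsonr(automatic_scores, human_scores)
    return r, p_value

# e.g., correlate per-example BERTScore-f1 with human coverage ratings:
# r, p = segment_level_correlation(bertscore_f1, human_coverage)
```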
5 RELATED WORK
Procedural Planning Learning to generate procedural plans (Zhang et al., 2020a; Lyu et al., 2021; Zhang et al., 2020b; Chang et al., 2020; Wu et al., 2022; Huang et al., 2022) is important for embodied agents (Tellex et al., 2011; Jansen, 2020; Ahn et al., 2022) and conversational assistants (Ilievski et al., 2018; Yang et al., 2022). Previous work views procedural script learning as a structured form of commonsense knowledge (Gupta et al., 2004; Regneri et al., 2010; Wanzare et al., 2016), while more recent work strengthens its association with changing environments for executable action planning (Puig et al., 2018; Shridhar et al., 2020). Some works (Sun et al., 2020; Zhao et al., 2021) explore utilizing human-written programs to precisely specify tasks. Our method tackles the problem with awareness of cause-effect relations by utilizing commonsense-infused prompts via a neuro-symbolic approach (Mao et al., 2019; Nye et al., 2021; Yi et al., 2018) for zero-shot procedural planning.
Causality for Language Generation The integration of causality and machine learning has been an intriguing topic for many problems (Pearl, 2009; Schölkopf, 2022). Previous studies focus on causal inference for natural language understanding (Chen et al., 2020; Keith et al., 2020; Wood-Doughty et al., 2018) and on generating counterfactual text representations (Feder et al., 2021). Weber et al. (2020) propose an intervention method for script learning. However, these methods cannot be directly applied to procedural planning, which requires a formal structure. Our method is based on mediation analysis (VanderWeele, 2015) and causal intervention (Pearl, 2009; Peters et al., 2017).
Prompt for Large Language Models There is an emerging interest in using prompts to extract knowledge from large language models (Chen et al., 2022; Le Scao & Rush, 2021; Su et al., 2022; Ye et al., 2022; Zhou et al., 2022; Kojima et al., 2022). Cao et al. (2022) treat the prompt as a cause of the task-specific predictor and investigate biases in prompt-based probing evaluations. Chain of thought (Wei et al., 2022) discovers that LLMs can perform better on reasoning tasks when the prompt is designed as a series of short sentences that mimic the human reasoning process.
6 CONCLUSION AND FUTURE WORK
Procedural planning is a newly emerged research area of great importance to various applications, such as household robots and virtual assistants. We propose a neuro-symbolic procedural PLANner (PLAN) with commonsense-infused prompts elicited from an external knowledge base to solve the procedural planning problem in a zero-shot manner without human-annotated exemplars. Experiments show the effectiveness of our proposed PLAN under both original and counterfactual settings, indicating the capability of mitigating spurious correlations by injecting external knowledge into LLMs. However, procedural planning over long-horizon and composite tasks remains challenging, and exploring multimodal learning and developing human-aligned evaluation metrics are promising future directions in this area.
7 ETHICAL STATEMENT
Given the limited cultural diversity of the datasets we use, RobotHow and WikiHow, our results may be biased toward a single cultural background. For instance, given the task "make breakfast", procedural plan generation should take multiple cultural backgrounds into consideration.
8 REPRODUCIBILITY STATEMENT
We provide more data samples and qualitative samples in supplemental materials. In addition, we provide our code implementation at https://anonymous.4open.science/r/PLANNER-7B24 to reproduce our experiments. The Preprocess folder provides the utils to construct the data. The Evaluation folder provides the code for automatic and human evaluation tools. The Planning folder contains the main code for our approach and reproduced planners for procedural planning. The Visualization folder provides the code we use to visualize in the environment.
ACKNOWLEDGMENTS
The research was sponsored by the U.S. Army Research Office and was accomplished under Contract Number W911NF-19-D-0001 for the Institute for Collaborative Biotechnologies. This work was also supported by the National Science Foundation award #2048122. We thank the Robert N.Noyce Trust for their generous gift to the University of California via the Noyce initiative. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Appendix
Table of Contents
A SCM Theoretical Details
  A.1 Causal Preliminaries
  A.2 The Backdoor Adjustment
  A.3 The Front-door Adjustment
B Implementation Details
  B.1 Original Dataset Details
  B.2 Counterfactual Dataset and Experiment Details
  B.3 Method Details
  B.4 Hyperparameter Search and Configuration Decision
  B.5 Computation and Resources
C Evaluation Details
  C.1 Crowdsourcing Human Evaluation
  C.2 More Results
D Qualitative Examples
  D.1 Intermediate Output
  D.2 Predicted Procedural Plans
E Discussion
  E.1 Limitations
  E.2 Failure Analysis
  E.3 Ethical Considerations
A SCM THEORETICAL DETAILS
A.1 CAUSAL PRELIMINARIES
The Structural Causal Model (SCM) is a directed acyclic graph (DAG) used to describe the causal relationships within a system (Pearl, 2009). In this paper, we refer to the SCM unrolled along the time dimension as the full temporal causal graph, while the rolled-up version is also called the causal summary graph (Peters et al., 2017). In an SCM, if a variable D is a cause of both T and Si, it is called a confounder. A confounder opens up a backdoor path and causes a spurious correlation between T and Si. A backdoor path is defined as a remaining path between T and Si when all the arrows pointing out of T are removed; therefore, T ← D → Si is a backdoor path. For our SCM with mediator Pi shown in Figure 4c (same as Figure 2b in the main paper), there is no backdoor path between T and {Pi, Si−1} because only D → T is left after removing the outgoing arrows of T. On the other hand, there is a backdoor path between Pi and Si, i.e., Pi ← T ← D → Si, so that Pi indirectly affects the observation of Si through {T, Si−1} and D. The mediator is a variable added between the treatment variable (the cause, T and Si−1 in our case) and the outcome variable (the effect, Si in our case), and it blocks all directed paths from the cause to the effect (Zhang et al., 2016). Spurious correlations happen when two variables are statistically but not causally related, either because a third variable influences both at the same time or because the correlation is coincidental.
To identify the true causal effect between X and Y, we aim to estimate the conditional π(Y | do(X)) after intervention with the do-operator. The do-operator breaks the backdoor path by setting X to a fixed value independent of Z, so that the path Z → X can be removed to eliminate the backdoor paths. In practice, the backdoor adjustment and front-door adjustment are two fundamental methods to implement interventions and obtain the conditional π(Y | do(X)).

Clarity of the Definition As a language prompt, Pi inherits the content from Pi−1 and thus can be detached from steps before Si−1 for simplicity.
Causal Intervention There are two types of operations to control confounding bias: the backdoor adjustment and the front-door adjustment (Pearl, 2009). The backdoor adjustment is intractable in our case because it requires the prior distribution of the confounding variables. On the other hand, we can construct an input prompt as a mediator Pi for T → Si and Si−1 → Si. Then the front-door adjustment applies a two-step do-operation to mitigate the bias by investigating Pi → Si (Pearl, 2009). Specifically, we construct the prompt mediator Pi using the techniques illustrated in Section 2.2.
The pre-trained knowledge (D) in LLMs confounds language models into making biased decisions toward an unreasonable action. Since the confounder is unobservable, intervention techniques such as backdoor adjustment (definition in Appendix A.2) (Hu & Li, 2021; Weber et al., 2020; Yue et al., 2020) are not applicable in our SCM. Instead, we build a mediator and implement it as a commonsense-infused prompt. Through the mediator, we can identify causal effects among goals and steps by investigating the indirect effect from the goals, which is essentially the front-door adjustment (definition in Appendix A.3) in causality (Pearl, 2009).
A.2 THE BACKDOOR ADJUSTMENT
The backdoor adjustment is one way to realize the intervention do(T = t) by considering the conditional probability over the existing data distribution with the observed confounder D. Let πi denote π(·|Pi−1), the probability density function conditioned on Pi−1. It calculates the average causal effect by considering all strata of the dataset:
$$\pi_i(S_i \mid do(T)) = \sum_d \pi_i(S_i \mid T, D = d)\, \pi_i(D = d) \tag{5}$$
However, for LLMs, the pretraining data is usually unobservable and has been transformed into knowledge incorporated into the hidden space. Therefore, we are not able to directly apply the backdoor adjustment.
A.3 THE FRONT-DOOR ADJUSTMENT
The front-door adjustment is another technique to apply intervention by introducing a mediator Pi when the confounder is unobservable. As is explained in Section 2.2 from the main paper, the front-door adjustment is equivalent to two consecutive do-operations on task T and prompt Pi. We first investigate the generation of S1 and then expand it to St.
Timestep i = 1 As shown in Figure 4a, since there are no preceding steps, the first step generation involves D, T and P1 only. Similar to the proof in Section 2.2 from the main paper, we have:
$$\begin{aligned} \pi_i(S_1 \mid do(T)) &= \sum_p \pi_i(S_1 \mid do(P_1 = p))\, \pi_i(p \mid do(T)) \\ &= \sum_p \pi_i(p \mid T) \sum_t \pi_i(S_1 \mid p, T = t)\, \pi_i(T = t) \end{aligned} \tag{6}$$
By adding intervention to T , we make the value of do(T = t) independent of the confounder D at the beginning. The backdoor path through D → T is eliminated as a result.
Timestep i > 1 As shown in Figure 2a from the main paper, we model the mediator Pi as an effect of three variables: T, Pi−1 and Si−1. The first step of our front-door adjustment is to apply the do-operator on the three variables and observe the change in Pi, as explained in Section 2.2 from the main paper. Since there are no backdoor paths between Pi and these variables, the probability after intervention equals the conditional probability without intervention:
$$\pi_i(P_i = p \mid do(T)) = \pi_i(P_i = p \mid T) \tag{7}$$
$$\pi_i(P_i = p \mid do(P_{i-1})) = \pi_i(P_i = p \mid P_{i-1}) \tag{8}$$
$$\pi_i(P_i = p \mid do(S_{i-1})) = \pi_i(P_i = p \mid S_{i-1}) \tag{9}$$
The second step is to apply the do-operator on Pi and then identify the causal effect as:

$$\pi_i(S_i \mid do(P_i)) = \sum_{t, p', s} \pi_i(S_i \mid P_i, T = t, P_{i-1} = p', S_{i-1} = s)\, \pi_i(T = t, P_{i-1} = p', S_{i-1} = s) \tag{10}$$

Combining Equations 7–9 and Equation 10, we have the front-door adjustment. Note that there are three backdoor paths, one from each of the variables T, Pi−1, and Si−1, as shown in Figure 4b (drawn in blue, red and purple). More importantly, the one through T, i.e., Pi ← T ← D → Si (the blue path in Figure 4b), and the one through Pi−1, i.e., Pi ← Pi−1 ← T ← D → Si (the red path in Figure 4b), share the same subpath. The intervention on the task T breaks the backdoor paths for both T and Pi−1. Therefore, we have our front-door adjustment as
$$\pi_i(S_i \mid do(S_{i-1}), do(P_{i-1}), do(T)) \tag{11}$$
$$= \sum_p \pi_i(S_i \mid do(P_i = p))\, \pi_i(p \mid do(S_{i-1}), do(P_{i-1}), do(T)) \tag{12}$$
$$= \sum_p \pi_i(S_i \mid do(P_i = p))\, \pi_i(p \mid do(S_{i-1}), P_{i-1}, do(T)) \tag{13}$$
$$= \sum_p \pi_i(S_i \mid do(P_i = p))\, \pi_i(p \mid do(S_{i-1}), do(T)) \tag{14}$$
$$= \sum_p \pi_i(p \mid S_{i-1}, T) \sum_{s,t} \pi_i(S_i \mid p, S_{i-1} = s, T = t)\, \pi_i(S_{i-1} = s, T = t) \tag{15}$$
$$= \pi_i(S_i \mid do(S_{i-1}), do(T)) \tag{16}$$
We have Equation 13 because of the intervention on T and Rule 2 (Pearl, 1995), and Equation 14 because of Rule 1 (Pearl, 1995). After simplification based on Equations 12–16, we get the SCM at timestep i > 1 in Figure 4c. This is an equivalent SCM after eliminating Pi−1 in Figure 4b. The reason we can eliminate Pi−1 is as follows. We follow a common method of constructing temporally-extended prompts, which is to append the predictions at previous timesteps to the prompt at the current timestep. In our case, PG,i is the same as PG,i−1, so Pi inherits part of the content from Pi−1, and the change only depends on Si−1. Thus Pi−1 and Si−2 are fixed, and there is no need to predict Pi−1 at timestep i again. In this way, we simplify the causal graph in Figure 4b to the one in Figure 4c. In summary, we define and simplify the causal graph based on the temporally-extended property of our prompt construction (Pi inherits the content from Pi−1). We end up with Equations 14–16, shown as Equation 3 in Section 2.2 from the main paper.
B IMPLEMENTATION DETAILS
B.1 ORIGINAL DATASET DETAILS
RobotHow This dataset is released under an Attribution-NonCommercial-ShareAlike 4.0 International Creative Commons License. We evaluate the inference of 150 tasks randomly selected from the dataset. Each program contains the task name, task description and steps. We use the task name and the sequence of steps as our input and output references. Each step is a composition of [Action], [Object] and [Number]. For example, the sequence of steps for the task "Watch TV" is: 1. [Walk] <TELEVISION> (1) 2. [SwitchOn] <TELEVISION> (1) 3. [Walk] <SOFA> (1) 4. [Sit] <SOFA> (1) 5. [Watch] <TELEVISION> (1).
WikiHow This dataset2 is under an Attribution-NonCommercial-ShareAlike 3.0 Creative Commons License, and the text content is free to modify, republish and share. We evaluate the inference of 1000 tasks randomly selected from the dataset. The admissible action space and interaction object space are more complex than the programs in RobotHow, and there is no fixed "[Action] <Object> (Number)" form for each step. Each article contains a title, bold headlines and text. We utilize the title and headlines as our task name and steps, respectively.
External Knowledge Base For the external knowledge base, we utilize ConceptNet to leverage commonsense reasoning ability and help ground language generation in goal-guided procedural text generation. ConceptNet (Speer et al., 2017) captures commonsense knowledge explicitly with triplets of (head node, relation, end node). It contains 799,273 nodes and 2,487,810 edges that represent both symmetric and asymmetric relations. Specifically, the core relations we utilize are Synonym, AtLocation, CapableOf, Causes, CausesDesire, HasPrerequisite, HasSubevent, and UsedFor. Since we focus on commonsense knowledge for household tasks, we filter out the relations (/r/DistinctFrom, /r/DerivedFrom, /r/SymbolOf, /r/EtymologicallyRelatedTo, /r/EtymologicallyDerivedFrom) that are related to linguistics.
2https://www.wikihow.com
B.2 COUNTERFACTUAL DATASET AND EXPERIMENT DETAILS
Table 6 shows examples comparing the original program and the counterfactual program for each intervention method. Specifically, for Initial Configuration, we randomly append a location to a given task name to constrain the location of completing the task. The steps are prepended with the initial step "walk to <Location>". For Intermediate Step, we randomly sample a step from the task-specific program and append it to the task name to constrain the way of implementing the task. For Final Goal, we randomly combine two tasks by combining both the task names and the programs to construct a set of long-horizon composite tasks.
We conduct counterfactual experiments by applying randomly selected intervention methods over RobotHow, and we only apply the Intermediate Step intervention over WikiHow due to its loose configuration requirements and the long text of WikiHow contents. Note that the performance gain of PLAN under the counterfactual setting mainly comes from the additional task guidance introduced by the Intermediate Step intervention. The baselines, in contrast, mostly experience performance drops due to their limited annotated exemplars. PLAN consistently outperforms the baselines by a large margin, indicating its superiority under the counterfactual setting.
B.3 METHOD DETAILS
The existing formalization of the procedural planning task can be mainly categorized as 1) sequential choice making (Lyu et al., 2021; Wu et al., 2022; Zhang et al., 2020a;b), which reasons about the next step from the options given, the task, and previous steps; 2) conditioned generation (Huang et al., 2022; Ahn et al., 2022), which generates the temporally extended plans to implement the task. We study the procedural planning task as the conditioned generation problem (Huang et al., 2022; Ahn et al., 2022) since it resembles real-world scenarios.
Baselines LLMaP propose a procedure to extract temporally extended plans from large pre-trained language models. Chain explores manually creating exemplars that mimic the reasoning process
and uses them to prompt large language models for reasoning tasks. To compare with Chain on the procedural planning task, we manually generate exemplars that contain the chain of thought for 1% of the inference task programs. Note that for the BART language model, we use the BART-large version, and we use the 1.5-billion-parameter GPT-2 (aka gpt2-xl). For the translation model LMT, we use sentence-transformers (RoBERTa-large). All these models are released by HuggingFace. In addition, our experiments with GPT3 (davinci) use the OpenAI API (May 2022).
External Knowledge Graph ConceptNet5 defines a set of 34 relations.3 Within the relations we consider for the procedural planning task, the average sampling time of subgraph sampling is 0.03576 milliseconds per task program.
B.4 HYPERPARAMETER SEARCH AND CONFIGURATION DECISION
We perform a hyperparameter search for all evaluated methods for the following hyperparameters.
• The confidence threshold θ, below which generation terminates, is searched in {0, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8}.
• The step horizon, which constrains the maximal number of procedural planning steps, is searched in {10, 20, 40}.
• The number of hops for retrieving the subgraph from the external knowledge base is searched in {1, 2, 3}.
• The ratio of maximal concepts to the length of the task name is searched in {1, 2, 3}.
• The cosine similarity threshold for keeping a task-specific concept is searched in {0.4, 0.6, 0.8}.
• The edge weight threshold θe is searched in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}.
• The top-k task-specific node value is searched in {1, 5, 10, 15, 20, 25, 50, 100}.
The configurations used in the experiments are: θ = 0.7, a 20-step horizon, 3 hops, a concept-to-task-length ratio of 3, a cosine similarity threshold of 0.4, θe = 0.6, and k = 10.
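For reference, the chosen configuration collected in one place; the values are taken from the text, and the dictionary itself is illustrative.

```python
# Chosen configuration, collected from the values stated above.
CONFIG = {
    "confidence_threshold": 0.7,       # theta: terminate generation below this
    "step_horizon": 20,                # maximal number of planning steps
    "hops": 3,                         # H-hop subgraph retrieval
    "concept_to_task_length_ratio": 3,
    "concept_similarity_threshold": 0.4,
    "edge_weight_threshold": 0.6,      # theta_e
    "top_k_nodes": 10,                 # k
}
```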
We empirically choose the hop number H as 3, considering both the input length limit of the LLMs and the fact that 3-hop neighborhoods contain reasonably relevant information in practice (Zhang et al., 2022).
B.5 COMPUTATION AND RESOURCES
We use one single NVIDIA A100 GPU Server for all the experiments. Since there is no training in our zero-shot settings, the computation is only used for the inference stage of the experiments.
C EVALUATION DETAILS
C.1 CROWDSOURCING HUMAN EVALUATION
We conduct all human evaluations (rating and win-lose comparison) on the Amazon Mechanical Turk platform. Each example is rated by 3 annotators. For every assignment, we ask Amazon Mechanical Turk workers to evaluate the quality of the provided low-level steps given the high-level task description. For the Win-Lose Comparison, they were asked to choose between the two provided model-generated results: 1: the first one is better, 2: equal, and 3: the second one is better. For the Human Ratings, they were asked to score each sample on a 5-point Likert scale. This process does not involve collecting any personal information, and we manually check that no offensive content is produced by the models.
The assignment layout templates for workers are shown in Figure 7 and Figure 6. Specifically, we evaluate 50 randomly selected task examples from each dataset (RobotHow and WikiHow) under all settings (standard and counterfactual). As a sanity check, we only keep examples where workers read the instructions carefully, verified by whether they give a score of 1 to the empty program. The hourly wage paid to participants is estimated at $9, and the total amount spent on participant compensation is $1296. The details of the Human Intelligence Tasks process are described in the following sections.

3https://github.com/commonsense/conceptnet5/wiki/Relations
C.1.1 WIN-LOSE COMPARISON
During the process of Human Intelligence Tasks, the workers are shown the following instructions: Read the given task and the sequence of steps, determine which set of steps can better complete the target task. In other words, can the task be decomposed into these steps? Please consider the sequential order of the steps.
Then the program to be evaluated is provided as:
Question Task: Study
Sequence 1:: Step 1: Walk to textbook Step 2: Read book Step 3: Walk to book
Sequence 2:: Step 1: Walk to home office Step 2: Find desk
Finally, the workers are asked to score the program by following the instructions below: Select an option: 1 - Sequence 1 is better; 2 - Tie; 3 - Sequence 2 is better
The above example is to evaluate the order metric, for the coverage metric, the same process are conducted, except for the instructions are: Read the given task and the sequence of steps, and determine which sequence covers more steps that are necessary to complete the target task. Please ignore the sequential order of the steps.
C.1.2 HUMAN RATINGS
Similar as the Win-Lose Comparison Human Intelligence Tasks, the workers are shown the following instructions: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (Please consider the sequential order of the steps.). You could directly give the lowest score (1) for the empty steps. In other words, can the task be decomposed into these steps? (Please consider the sequential order of the steps.)
Then the program to be evaluated is provided as:
Question Task: Write an email
Sequence of Steps: Step 1: Walk to home office Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Walk to computer Step 7: Find chair Step 8: Sit on chair Step 9: Find keyboard Step 10: Grab keyboard Step 11: Find mouse Step 12: Grab mouse Step 13: Type on keyboard
Finally, the workers are asked to score the program by following the instructions below: Use the slider below to indicate how much you agree with the following statement (1 = Strongly disagree, 5 = Strongly agree). If ”sequence of steps” are blank, please directly choose 1 (lowest score). The task can be completed in any reasonable scenario using the provided steps. [SLIDER PROVIDED
HERE]
The above example is to evaluate the order metric, for the coverage metric, the same process is conducted, except for the instructions are: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (Please ignore the sequential order of the steps.). You could directly give the lowest score (1) for the empty steps. In other words, can the task be decomposed into these steps? (Please ignore the sequential order of the steps.)
C.2 MORE RESULTS
Significance Test We provide paired t-test (p < 0.05) statistics for Table 2. On RobotHow, our PLAN significantly outperforms all baselines on Original-Order (BART) and Counterfactual-Coverage (GPT2). On WikiHow, our PLAN significantly outperforms all baselines on Original-Coverage (BART, GPT2), Counterfactual-Coverage (BART, GPT2), and Counterfactual-Order (BART). For the coverage metric under the counterfactual setting, the human-provided program is not significantly better than our PLAN.
We also conduct paired t-tests (p < 0.05) over the variants "w/o Adaption" and "w/o Symbolic". Compared with the full model PLAN, the variants experience a statistically significant performance drop. In particular, on BERTScore-f1 the p-values are 8.884e−13 and 1.4e−8, respectively. This further confirms the importance of these modules.
Results on GPT-3 In addition, we conduct experiments with GPT-3 (davinci version) using OpenAI API. We showcase the comparison in Table 9 and Table 10.
Motivation of Evaluation Metrics The procedural planning task is open-domain in nature, in that the golden plans may not be unique. This makes common automatic metrics proposed for natural language tasks imperfect for evaluating procedural planning. Similar observations about the challenge of directly judging systems with automatic metrics are discussed in LLMaP (Huang et al., 2022). We assume that human evaluation of Coverage and Order can reflect how close the procedural plans are to the human-annotated programs, because the human annotators are explicitly required to determine whether the task can be completed in any reasonable scenario using the procedural plans. Thus, we provide both automatic and human evaluations of the two aspects, Coverage and Order, with descriptions in the Metrics paragraph in Section 4.1.
Evaluation on Success Rate Metric To make human evaluations more intuitive, we provide an additional Success Rate metric that shows whether the procedural plans can successfully implement the task, focusing on task success instead of the coverage or order of the plans. We show the Success Rate evaluations of the baselines and our method in Table 11. The assignment layout template for workers is shown in Figure 8.
More Ablation To verify the contribution of the translation language model LMT that translates the knowledge prompt PG into the admissible one P̂G, we conduct an additional ablation experiment by removing this LMT and replacing P̂G with PG to prompt the LLM for procedural planning. We provide results with comparisons to the other ablations in Table 12.
Results on Counterfactual Task Samples We show automatic evaluation results on counterfactual RobotHow in Table 13.
D QUALITATIVE EXAMPLES
D.1 INTERMEDIATE OUTPUT
We provide running examples with intermediate output for each module in the following paragraph. First, we show the intermediate output of input task T , the subgraph Gs depicted in the tuple of the start node, relation type, tail node and edge weight, the knowledge prompt PG and the translated one P̂G as below:
• Input task T : Take shower.
• Human-annotated Plan Reference: Step 1: Walk to bathroom. Step 2: Walk to clothes dress. Step 3: Find clothes dress. Step 4: Put off clothes dress. Step 5: Find shower. Step 6: Enter shower. Step 7: Find soap. Step 8: Grab soap. Step 9: Scrub soap. Step 10: Put back soap. Step 11: Leave shower. Step 12: Find towel. Step 13: Grab towel. Step 14: Wipe towel. Step 15: Find clothes dress. Step 16: Put on clothes dress.
• Task-relevant subgraph Gs(Nhead, Re, Ntail, Ew): (take a shower, HasLastSubevent, dry off, 6.0); (bathe, HasLastSubevent, dry off, 6.0); (take a shower, HasPrerequisite, take out your clothes, 4.47); (take a shower, HasSubevent, get clean, 4.47); (take a shower, HasPrerequisite, take your clothes off, 3.46); (go to a party, HasPrerequisite, take a shower, 2.82); (play lacrosse, HasLastSubevent, take a shower, 2.82); (get clean, HasPrerequisite, take a shower, 2.82); (take a shower, MotivatedByGoal, wash your hair, 2.82); (play sports, HasLastSubevent, take a shower, 2.82); (go to the hairdresser, HasPrerequisite, take a shower, 2.82); (take a shower, HasPrerequisite, turn on the water, 2.0); (have a bath, HasLastSubevent, dry off, 2.0); (get wet, HasSubevent, dry off, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); (take a shower, HasSubevent, wash your hair, 2.0); (take a shower, HasLastSubevent, turn off the water, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); (take a shower, HasLastSubevent, put your clothes on, 1.0); (take a shower, HasSubevent, use shampoo, 1.0); (take a shower, HasSubevent, wash behind your ears, 1.0); (take a shower, HasSubevent, wash your body, 1.0); (take a shower, HasPrerequisite, go to the bathroom, 1.0); (take a shower, HasPrerequisite, go to the bathroom and undress, 1.0); (take a shower, HasPrerequisite, step into the shower, 1.0); (take a shower, HasPrerequisite, soap up, 1.0); (take a shower, HasPrerequisite, bring some soap, 1.0); (take a shower, HasPrerequisite, bring some shampoo, 1.0); (take a shower, HasLastSubevent, towel yourself off, 1.0); (take a shower, HasPrerequisite, remember to take your towel, 1.0);
• Knowledge prompt PG: Step: take out your clothes. Step: set clean clothes. Step: go to the bathroom. Step: go to the bathroom and undress. Step: take your clothes off. Step: turn on the water. Step: bring some soap. Step: bring some shampoo. Step: remember to take your towel. Step: get clean. Step: wash your hair. Step: use shampoo. Step: wash behind your ears. Step: wash your body. Step: turn off the water. Step: dry off. Step: become more clean. Step: put your clothes on.
• Translated knowledge prompt P̂G: Step: find clothes underwear. Step: wash shower. Step: walk to bathroom. Step: look at shower. Step: take off clothes underwear. Step: wash shower. Step: find soap. Step: grab soap. Step: scrub shower. Step: find shampoo. Step: grab shampoo. Step: find towel. Step: wipe shower. Step: wash hair. Step: wipe shampoo. Step: scrub shower. Step: wash body. Step: switch off washing machine. Step: scrub shower. Step: wipe shower. Step: put on clothes underwear.
• Generated plan ST : Step 1: Find clothes underwear. Step 2: Walk to Bathroom. Step 3: Take off clothes shirt. Step 4: Scrub shower. Step 5: Pour shampoo into hair. Step 6: Wipe shampoo. Step 7: Wipe hair. Step 8: Wash body. Step 9: Find Towel. Step 10: Put on clothes underwear.
D.2 PREDICTED PROCEDURAL PLANS
More qualitative examples of final predicted procedural plans, randomly selected, are provided. Table 14 shows random samples on the original dataset. Table 15 shows random samples on the counterfactual datasets with the Intermediate Step intervention method. And Table 16 shows random samples on the counterfactual RobotHow with the Initial Configuration and Final Goal intervention methods.
RobotHow, Task: Play Games

Human: Step 1: Walk to bedroom. Step 2: Walk to video game console. Step 3: Find video game console. Step 4: Walk to video game console. Step 5: Switch on video game console. Step 6: Find television. Step 7: Switch on television. Step 8: Find video game controller. Step 9: Grab video game controller. Step 10: Find chair. Step 11: Walk to chair. Step 12: Sit on chair.

Chain: Step 1: Put chef knife on water glass. Step 2: Find sink.

LLMaP: Step 1: Walk to bedroom. Step 2: Walk to video game console. Step 3: Find video game console.
RobotHow (Initial Configuration), Task: Play games in dining room.

Human: Step 1: Walk to dining room. Step 2: Walk to video game console. Step 3: Find video game console. Step 4: Walk to video game console. Step 5: Switch on video game console. Step 6: Find television. Step 7: Switch on television. Step 8: Find video game controller. Step 9: Grab video game controller. Step 10: Find chair. Step 11: Walk to chair. Step 12: Sit on chair. Task: Play games in dining room. Step 1: Walk to video game controller. Step 2: Put video game controller on diningtable. Step 3: Put boardgame on kitchen table. Step 4: Put boardgame on diningtable.

Chain: Step 1: Walk to dining room. Step 2: Walk to sauce pan. Step 3: Pour bottle water into dish bowl. Step 4: Walk to water. Step 5: Walk to carrot. Step 6: Walk to food salt.
E DISCUSSION
E.1 LIMITATIONS
Although this work points out a direction for prompting actionable knowledge out of large-scale pre-trained language models with external commonsense knowledge, limitations in reasoning over long-horizon procedural plans still exist. Existing datasets for procedural planning, such as WikiHow and RobotHow, are all monolingual, supporting only English goals and plans. In the future, it is important to expand these datasets or to build novel datasets that support the many languages used across the world. The inherent differences between these languages may also result in different planning strategies at different levels of granularity or abstraction, which is potentially challenging. In addition, long-horizon and complex composite tasks remain challenging for existing procedural planners.
The above limitations are discussed mainly based on the challenges of the procedural planning task. In addition, there are limitations of our implementation that are guided by our causal analysis. First, the coverage of the leveraged external resources is limited, which is common in a knowledge-enhanced system. This may result in a wrong understanding of the task and produce unreasonable procedural plans. For example, the knowledge of the word "Turking", which refers to "The act or process of performing small tasks using the Amazon Mechanical Turk service." according to Wiktionary, is not covered in the external resources (e.g., ConceptNet). Since our proposed system does not assume specific external resources, it is plausible in the future to utilize more powerful external resources (e.g., Wiktionary). Second, the hop number and the threshold of the multi-hop retrieval in task-relevant subgraph sampling are currently configured hyperparameters. This may result in prompts that are not ideally constructed. Future work could instead make these hyperparameters learnable for each task domain, and also explore the pros and cons of end-to-end commonsense-infused prompts versus neuro-symbolically constructed prompts.
E.2 FAILURE ANALYSIS
We discuss detailed failure modes and examples with analyses below. Consider the predicted procedural plans for the task "Turking", which refers to "The act or process of performing small tasks using the Amazon Mechanical Turk service." according to Wiktionary. We compare the predicted procedural plans for this task among the baselines and our method: (1) The ground-truth plan is "Task: Turking. Step 1: Walk to home office. Step 2: Walk to desk. Step 3: Find chair. Step 4: Sit on chair. Step 5: Find computer. Step 6: Switch on computer". (2) The plan predicted by the Chain baseline is empty. (3) The plan predicted by the LLMaP baseline is "Task: Turking. Step 1: Put teddybear on oven." (4) Our prediction is "Task: Turking. Step 1: Eat food turkey. Step 2: Drink water. Step 3: Sleep." We can see that for this "out-of-knowledge" task, our method also leads to failed planning. We attribute this mainly to the limited knowledge in the external resources, as discussed in Appendix E.1, and this main failure mode can be avoided by introducing larger external resources (e.g., Wiktionary), similar to other knowledge-enriched methods.
E.3 ETHICAL CONSIDERATIONS
We hope to de-bias procedural planning to avoid misleading either humans or robots with daily-life instructions, which may result in unsafe situations. The cultural bias behind these datasets can be a critical issue for future work. As the ground-truth planning steps usually reflect the culture shared by the English-speaking group, other cultures may have completely different practical considerations that lead to different orderings of these steps, or even novel steps that are not proposed by the LLMs we utilized in this paper. In the future, we will consider cultural bias as a proxy variable so that we can adjust the implicit knowledge from LLMs, or the commonsense from external sources, according to the needs of different cultural backgrounds.

1. What is the main contribution of the paper regarding procedural planning with LLMs and causal models?
2. What are the strengths and weaknesses of the proposed approach, particularly in its complexity and performance compared to baseline planners?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are some minor notes and questions raised by the reviewer regarding certain aspects of the paper?
Summary Of The Paper
This work proposes an approach for procedural planning with LLMs and causal models. The approach first builds a commonsense causal model from an external knowledge base with adjustments. Then, given a query, it builds a task-relevant subgraph, which is provided as a procedural prompt. Finally, this is translated to admissible actions for the LLM to plan. This is demonstrated on two large planning tasks, RobotHow and WikiHow, and shown to outperform baseline planners.
Strengths And Weaknesses
The paper is interesting and the topic is very relevant. Planning from LLMs is promising, and grounding them in common sense as well as admissible scenes is a core challenge in this area. The approach of building a large knowledge base and leveraging it within an LLM is well-founded. The two planning datasets are large and good environments to test in. The results show improvements across the board compared to baselines.
The paper, however, has a few areas for improvement.
(1) The clarity could be improved. Some sections like Section 3 are quite dense and difficult to parse, particularly Section 3.1. Perhaps an earlier overview or explicit running examples could help clarify. Section 3.1 could use its own algorithm block and potentially a zoomed-in figure of the computation. Figure 3, though a nice overview, is very dense.
(2) Though the performance of PLAN is stronger than baselines, it is a smaller improvement than I would expect given the additional complexity, and it is also somewhat difficult to judge how large the improvement is. In many of the metrics PLAN outperforms by a few percentage points, or in voting it wins 50% of the time. While I acknowledge that this shows that PLAN is performing better, it isn't clear from these results that it is worth the vast additional complexity compared to baselines. The authors should add an additional metric similar to executability in Huang et al., showing the actual success rate of these plans, as this is the ultimate metric we care about. A few related questions:
I would be interested to learn more about what the main failure modes are.
Some results, such as Table 1, seem to outperform baselines with larger models. I'm surprised by this, as I would think your approach would be particularly important for adding structure when LLMs are more inaccurate. Do you have any intuition on why this might be?
Minor notes:
“The Termination Condition is either reaching the max step t or the matching score is below threshold θ.” Instead of thresholding, one can compare to an end-of-statement token’s probability.
Mention in the intro where the common sense external knowledge comes from (though I know it is in Figure 1).
How do you extract entity names from the task name?
“show that pre-trained knowledge (D) in LLMs confounds” What is D here?
“We describe the implementation of such frontdoor adjustment in Section 3.1.” but section 3.1 is 2 pages long, be more specific.
Table 3 bolds 0.433 in the bottom right, though GPT3 + Chain outperforms it with 0.471.
“PLAN surpasses powerful baselines (Chain of Thoughts (Wei et al., 2022) and Zero-shot Planner (Huang et al.)) by large margins on both the original and the counterfactual samples.” Are these baselines powerful? They don’t have external information except their prompts.
Clarity, Quality, Novelty And Reproducibility
See strengths and weaknesses. |
ICLR | Title
Neuro-Symbolic Procedural Planning with Commonsense Prompting
Abstract
Procedural planning aims to implement complex high-level goals by decomposition into simpler low-level steps. Although procedural planning is a basic skill set for humans in daily life, it remains a challenge for large language models (LLMs) that lack a deep understanding of the cause-effect relations in procedures. Previous methods require manual exemplars to acquire procedural knowledge from LLMs in the zero-shot setting. However, such elicited pre-trained knowledge in LLMs induces spurious correlations between goals and steps, impairing the model’s generalization to unseen tasks. In contrast, this paper proposes a neuro-symbolic procedural PLANner (PLAN) that elicits procedural knowledge from the LLMs with commonsense-infused prompting. To mitigate spurious goal-step correlations, we use symbolic program executors on the latent procedural representations to formalize prompts from external knowledge bases as a causal intervention toward the Structural Causal Model of procedural planning. Both automatic and human evaluations on WikiHow and RobotHow show the superiority of PLAN on procedural planning without further training or manual exemplars.1
1 INTRODUCTION
How to make a cup of coffee? As humans, we can easily specify a procedure to solve this task, using our innate ability of commonsense reasoning. However, can we endow machines with the same ability to construct a sequential plan? As depicted in Figure 1, procedural planning (Pearson, 1996; Zhang et al., 2020b; Huang et al., 2022) aims to decompose a high-level goal (Task: Watch TV) into a sequence of temporally extended steps (Procedural Plan: Step at all five time-steps).
We study procedural planning as a conditional text generation problem since it resembles real-world scenarios. Previous approaches (Huang et al., 2022; Ahn et al., 2022) require a small number of carefully written or held-out exemplars to acquire procedural knowledge. However, these manual exemplars, derived from task data, cannot cover the ever-changing task setups and the flexible dependency relations among goals and steps. In fact, the biased data may cause the model to learn spurious correlations and hinder the model from generalizing well in zero-shot scenarios. Studies in cognitive science show that humans rely on chunking mechanisms (Gobet et al., 2001; Miller, 1956) which turn primitive stimuli into conceptual groups to solve novel and complex problems. Inspired by this, we hypothesize that generalizable procedural planning ability can be achieved by learning cause-effect relations among complex goals and simpler steps using external knowledge.
To reveal the cause-effect relations in procedural planning, we devise a Structural Causal Model (SCM) (Peters et al., 2017), a directed acyclic graph commonly used to describe the causal relationships within a system (Pearl, 2009). As depicted in Figure 2, the pre-trained knowledge D (e.g., TV and living room are highly correlated) in LLMs confounds the system (D influences T, Si−1 and Si, resulting in spurious correlations) into making biased decisions toward an unreasonable step (e.g., Find Television). Thus, we adopt front-door adjustment (definition in Appendix A.3), which utilizes a mediator (Pi) that blocks all directed paths from the cause (T or Si−1) to the effect (Si). In this way, T (or Si−1) affects Si by flowing through indirect paths: T (or Si−1) affects Pi, and Pi affects Si. We can then identify the causal effects among goals and steps by investigating the indirect effect (Equation 3), which is computed by multiplying the effect of T (or Si−1) on Pi (Equation 1) with the effect of Pi on Si (Equation 2). With the above front-door adjustment, we can mitigate the spurious correlations (e.g., between “television” and “living room”) and thus make reasonable decisions on steps (e.g., Find book). Please refer to Appendix A.1 for causal preliminaries (including explanations of SCM, confounder, mediator, and spurious correlations) and Appendix A.3 for the front-door adjustment definition.

1 Source code and datasets are publicly available at https://sites.google.com/view/iclr-clap
Guided by the above causal analysis of procedural planning, we need to construct the mediator Pi and then intervene on task T and prompt Pi, which is required to compute the conditional probability in Equation 3. As depicted in Figure 3, we seek to automatically construct commonsense-infused prompts as the mediator Pi by concatenating the task and previous steps with commonsense knowledge extracted from external resources (e.g., ConceptNet (Speer et al., 2017)). First, we modify the goal input by sampling a task-relevant knowledge subgraph (Stage 1 in Section 3.1) to implement interventions on T. Then, we modify the prompt by adapting the edge weights to implement interventions on Pi (Edge-Wise Adaption of Stage 2 in Section 3.1). However, directly incorporating graph-structured knowledge into LLMs leads to the loss of logical order in eliciting procedural knowledge from LLMs. Thus, we apply symbolic executors (Mao et al., 2019; Yi et al., 2018) that execute a sequential mapping program on latent knowledge representations (e.g., the subevent of). In this way, we convert graph-structured knowledge into natural language that preserves procedural structure, such as the sequential order of two low-level steps (Symbolic Structuring of Stage 2 in Section 3.1). The procedural prompt PG (e.g., “please get the remote control”) is further translated into an admissible one P̂G (e.g., “grab remote control”) from the available steps in a certain domain (RobotHow or WikiHow in our case). Finally, we utilize the commonsense-infused prompt P̂G to control the generation of procedural plans in LLMs in a zero-shot setting (Section 3.2).
We conducted experiments on RobotHow (Puig et al., 2018) and WikiHow (Koupaee & Wang, 2018) under original and counterfactual situations. Our major contributions can be summarized as:
• We develop the first causal framework for procedural planning by 1) defining a temporally extended Structural Causal Model and 2) resolving spurious correlations between high-level goals and low-level steps via front-door adjustment with a prompt-based mediator.
• We propose a neuro-symbolic approach to construct commonsense-infused prompts for LLMs to tackle the procedural planning task without manual exemplars or further training.
• Extensive evaluations show the superiority of PLAN in terms of reasoning about the cause-effect relations among goals and steps and achieving promising planning ability.
2 EXTERNAL KNOWLEDGE MATTERS IN PROCEDURAL PLANNING
As depicted in Figure 1, procedural planning requires generating the Plan (e.g., Step 1: Walk to the living room.) conditioned on the Task (e.g., Watch TV). We first describe the problem definition
and then show why external knowledge matters in procedural planning through the lens of causality. Finally, we show how we elicit procedural ability from the Large Language Models (LLMs).
2.1 PROBLEM DEFINITION
Given a high-level task T (e.g., watch television in the living room) sampled from a task domain MT (e.g., RobotHow), a procedural planner aims to decompose it into lower-level, temporally extended steps ST = {S1, ..., Si | Si ∈ S̄}. There exists a fixed set of admissible steps S̄ constrained by the task domain MT (e.g., by the affordances of the interacted objects). The step Si at timestep i is generated as π(Si|T, S0:i−1).
2.2 A CAUSAL LOOK AT PROCEDURAL PLANNING WITH LLMS
We seek to empower the LLMs with the ability to reason about cause-effect relations in procedural planning. Thus, we devise a causal framework by first defining a Structural Causal Model (SCM) of procedural planning in Figure 2. The SCM describes the temporal dynamics and the procedural cause-effect relationships. Our causal assumption in the SCM indicates that there is a backdoor path from task to step, which must be blocked with front-door adjustment. Therefore, we model the input prompt as a mediator which is created from external knowledge. More specifically, we define our Full Temporal Causal Graph as in Figure 2a, which is an unrolled Structural Causal Model (SCM) for sequential decision-making. Our goal is to identify the causal relations between the attended task T and the plan procedures ST = {S1, S2, . . .} from LLMs. Initially, there are direct paths T → Si and Sk → Si, k < i, because Si relies on the task entities attended by the LLM and the previously accomplished steps. D is an unobserved confounder arising from knowledge learned during pre-training. D builds a backdoor path between T and Si and misguides the LLMs to attend to false entities when generating the next step (see Fig. 2b). Note that D is unobservable as we directly adopt the LLM without knowing the pre-training data. To mitigate the spurious correlation, we then introduce a mediator Pi for each Si as shown in Figure 2a. To achieve our front-door adjustment, we inject external knowledge into LLMs with a neuro-symbolic approach by adopting the three stages described in Section 3.1.
3 OUR APPROACH
Although LLMs have strong general language intelligence, they still perform poorly in reasoning about the cause-effect relations in procedural plans due to a lack of daily-life experience. We propose to elicit unbiased procedural planning knowledge from the LLMs using the created commonsense-infused Prompt P as π(Si|T, S0:i−1, P). Figure 3 and Algorithm 1 depict how PLAN tackles the procedural
planning in a five-stage manner. We illustrate the commonsense-infused prompt construction (the first three stages) in Section 3.1 and planning with LLMs (the last two stages) in Section 3.2.
3.1 COMMONSENSE-INFUSED PROMPT CONSTRUCTION
Overview Inspired by the causal analysis in Section 2.2, we propose to construct a commonsense-infused Prompt P that helps reveal the cause-effect relations among the goals and steps during procedural planning within three stages: 1) Stage 1 samples a subgraph Gs from the external knowledge base G by extracting task (T)-relevant nodes. 2) Stage 2 adapts the edge weights Ew in Gs and applies symbolic structuring to get the admissible knowledge prompt P̂G. 3) Stage 3 acquires the temporal order by temporally aggregating the prompt Pi with the previous steps S0:i−1.
Stage 1: Task-Relevant Knowledge Subgraph Sampling First, we investigate the causal effects T → Pi and Si−1 → Pi (Figure 2). Si is a collider that blocks the association between D and Pi in the path T ← D → Si ← Pi. Let πi denote π(·|Pi−1), the probability density function conditioned on Pi−1. Since there is no backdoor path for T → Pi, and similarly for Si−1 → Pi, we simply have the conditional probability after applying do-operators:
\[
\pi_i(P_i = p \mid \mathrm{do}(T)) = \pi_i(P_i = p \mid T), \qquad
\pi_i(P_i = p \mid \mathrm{do}(S_{i-1})) = \pi_i(P_i = p \mid S_{i-1})
\tag{1}
\]
We achieve the do-operation in a prompting way by modifying the goal input so that the model attends to the task-relevant entities. To implement this, we use NLTK to tokenize and POS-tag the task text T. Then we use the nouns (e.g., television), noun phrases (e.g., remote control), and verb phrases (e.g., watch television) as the concept nodes. In this way, the task name T is Semantically Parsed into the Concept Set TE. Each concept e ∈ TE is used as a query for sampling the H-hop task-relevant subgraph Gs ⊆ Ne × Rs × Ne from the external knowledge base G ⊆ N × R × N, where N and R denote the concept nodes and commonsense relations, respectively. When extracting Gs, we keep the triplets whose relation types are in the household domain (e.g., AtLocation, UsedFor) and filter out those in the linguistic domain (e.g., DistinctFrom, DerivedFrom) for the procedural planning task. Ne is maintained as a set of top-k task-relevant nodes using the weight of each Re, which is updated with the edge-wise adaption in Stage 2.
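As a concrete illustration of this stage, the sketch below implements the concept extraction and H-hop expansion described above. The triplet layout, relation whitelist, and helper names are our assumptions rather than the authors' exact implementation, and NLTK's tokenizer and tagger data packages are assumed to be installed.

```python
# A minimal sketch of Stage 1: concept extraction and H-hop subgraph sampling.
# `kb` is assumed to be a list of (head, relation, tail, weight) tuples loaded
# from ConceptNet; names and the relation whitelist are illustrative.
import nltk  # assumes 'punkt' and 'averaged_perceptron_tagger' are downloaded

HOUSEHOLD_RELATIONS = {"AtLocation", "UsedFor", "CapableOf", "Causes",
                       "CausesDesire", "HasPrerequisite", "HasSubevent"}

def parse_task_concepts(task_name):
    """Keep nouns and verbs from the task name as query concepts."""
    tagged = nltk.pos_tag(nltk.word_tokenize(task_name.lower()))
    return [word for word, tag in tagged
            if tag.startswith("NN") or tag.startswith("VB")]

def sample_subgraph(kb, concepts, hops=3):
    """Breadth-first H-hop expansion around the task concepts."""
    frontier, seen = set(concepts), set(concepts)
    subgraph = set()
    for _ in range(hops):
        next_frontier = set()
        for head, rel, tail, weight in kb:
            if rel in HOUSEHOLD_RELATIONS and (head in frontier or tail in frontier):
                subgraph.add((head, rel, tail, weight))
                next_frontier.update({head, tail})
        frontier = next_frontier - seen
        seen |= next_frontier
    return sorted(subgraph, key=lambda t: -t[3])  # highest weight first
```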
Stage 2: Edge-Wise Adaption and Symbolic Structuring Second, we need to find the causal effect of Pi → Si. Since the path Pi ← T ← D → Si contains a backdoor from Pi to Si, we cannot rely on the conditional probability. Instead, we intervene on Pi using the do-operator to cut off D → T:
\[
\begin{aligned}
\pi_i(S_i \mid \mathrm{do}(P_i = p))
&= \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\, \pi_i(T = t, S_{i-1} = s) \\
&= \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\, \pi_i(S_{i-1} = s \mid T = t)\, \pi_i(T = t)
\end{aligned}
\tag{2}
\]
The retrieved concept-centered graph has multiple edges representing various relationships with other actions and entities. Therefore, the summation over the intervened T can be achieved by incorporating these edges into the prompt. For instance, “living room” can be “walked to” and “used for reading”, while “book” can be located in “living room” and “bedroom”. Similarly, we extrapolate over the edges for i − 1 hops to aggregate the intervened Si, i.e., P(Si−1 = s|T = t). Directly ranking the retrieved nodes Ne by the annotated weight (Ew) in the external knowledge base would result in a spurious correlation, because such retrieved local subgraphs tend to capture task-invariant concept nodes as the causal factors. To mitigate this, we propose to adapt the weight of each triplet (Edge-wise Adaption). The adapted weight is the sum of the original edge weight and the cosine similarity between the tail node embedding nEtail of the edge Re and the task embedding vtask: Êw ← Ew + cosine(nEtail, vtask). The embeddings are projected from the node text and the task name using the sentence-transformer (Reimers & Gurevych, 2019). The nodes Ne are finally retrieved by ranking the adapted weight Êw. To better track the utilized external knowledge during inference, we construct the task-dependent commonsense prompt with a Symbolic Executor (Symbolic Structuring), guided by the relation type of each triplet in Gs whose adapted edge weight is above the threshold θe. Specifically, the Symbolic Executor acquires the neural information of each natural language node and executes a sequential mapping program by sampling the operation Op from the Symbolic Rule Set R according to the edge relation type. The Symbolic Rule Set R is obtained by mapping the descriptions of the relations in the external knowledge graph (e.g., in ConceptNet, AtLocation represents ‘A is a typical location for B, or A is the inherent location of B. Some instances of this would be considered meronyms in WordNet.’) to symbolic operations (e.g., Op_AtLocation). For instance, an AtLocation edge samples the operation Op_AtLocation from R, which takes the commonsense relation of the triplet from Gs as the parameters to query the procedural concept output given the natural language meaning of the linked nodes (e.g., go to the location of Start_Node_Of(re) in this case). Similarly, Op_UsedFor may refer to “go to find End_Node_Of(re) and use it for Start_Node_Of(re)”, and the operators Op_HasSubevent and Op_HasPrerequisite recursively navigate the subgraph Gs. After navigating the subgraph, we linearize the transformed triplets as the Procedural Prompt PG, which is then translated to the Admissible Knowledge Prompt P̂G by the Translation Language Model LMT.
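The sketch below illustrates the edge-wise adaption and a toy subset of the symbolic rule set; the encoder checkpoint, operator templates, and default thresholds are illustrative assumptions (the threshold values match those listed in Appendix B.4, and the templates paraphrase the operator descriptions above).

```python
# A minimal sketch of Stage 2: edge-wise adaption followed by symbolic
# structuring. The checkpoint name and templates are assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-roberta-large-v1")

def adapt_and_rank(subgraph, task_name, top_k=10, theta_e=0.6):
    """Re-rank triplets by E_w + cosine(tail embedding, task embedding)."""
    task_emb = encoder.encode(task_name, convert_to_tensor=True)
    adapted = []
    for head, rel, tail, weight in subgraph:
        tail_emb = encoder.encode(tail, convert_to_tensor=True)
        score = weight + util.cos_sim(tail_emb, task_emb).item()
        if score >= theta_e:
            adapted.append((score, (head, rel, tail)))
    adapted.sort(reverse=True)
    return [triplet for _, triplet in adapted[:top_k]]

# Toy subset of the symbolic rule set R; templates paraphrase Section 3.1.
SYMBOLIC_OPS = {
    "AtLocation":      lambda head, tail: f"go to the {tail}.",
    "UsedFor":         lambda head, tail: f"go to find {tail} and use it for {head}.",
    "HasPrerequisite": lambda head, tail: f"Step: {tail}.",
    "HasSubevent":     lambda head, tail: f"Step: {tail}.",
}

def verbalize(triplets):
    """Linearize the re-weighted triplets into the procedural prompt P_G."""
    return " ".join(SYMBOLIC_OPS[rel](head, tail)
                    for head, rel, tail in triplets if rel in SYMBOLIC_OPS)
```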
Stage 3: Temporally-Extended Aggregation To acquire the temporal order in the procedure, we obtain the prompt P at timestep i by aggregating the task T, the history steps S0:i−1, and the current external knowledge P̂G. The underlying causal mechanism is a combination of Eq. 1 and Eq. 2:
\[
\begin{aligned}
\pi_i(S_i \mid \mathrm{do}(T), \mathrm{do}(S_{i-1}))
&= \sum_{p} \pi_i(S_i \mid \mathrm{do}(P_i = p))\, \pi_i(p \mid \mathrm{do}(T), \mathrm{do}(S_{i-1})) \\
&= \sum_{p} \pi_i(p \mid T) \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\, \pi_i(T = t, S_{i-1} = s)
\end{aligned}
\tag{3}
\]
The adjustment and marginalization in Eq. 3 are achieved in the input space by forming the Procedural Prompt PG, which allows the LLM to attend to the causal entities instead of the highly correlated ones when generating the next step. The LLM can reason over the most relevant edges to link the concepts with the task entities as context. The prompts from knowledge bases are independent of the pre-training data distribution, so Pi is independent of D and satisfies the front-door criterion. Please refer to Appendix A.3 and Figure 4 for the simplification of our structural causal model.
3.2 PROCEDURAL PLANNING WITH LARGE LANGUAGE MODELS
Stage 4: Semantic Generation The external knowledge is further concatenated with the goal input (T) as the initial prompt. Given the prompt, the generation language model LMG ∈ {PAR, PAE} (e.g., GPT3, BART) generates the next sentence, and the most confident prediction is then appended to the previous prompt. The Termination Condition is either reaching the maximum step count t or the matching score falling below the threshold θ. The joint probabilities of the auto-regressive (PAR) and auto-encoder (PAE) models are factorized as:
\[
\pi_{AR}(x) = \prod_{n=1}^{N} p(s_n \mid \hat{P}_G, s_{1:n-1}, T), \qquad
\pi_{AE}(x) = \prod_{n=1}^{N} p(s_n \mid \hat{P}_G, \{s_{1:n-1}, [\mathrm{MASK}]\}, T)
\tag{4}
\]
where P̂G represents the commonsense knowledge and T represents the task name.
Algorithm 1 Neuro-Symbolic Procedural Planning using Commonsense-Infused Prompting

Require: Task sample T, admissible step set S̄, external knowledge graph G; language models for generation LMG and translation LMT; Symbolic Rule Set R
Ensure: Procedural plan ST = {S1, ..., Si}

1: [Stage 1] Semantically parse T into entity set TE
2: Maintain top-k task-relevant nodes Ne in TE
3: Retrieve subgraph Gs ⊆ Ne × Rs × Ne from G ⊆ N × R × N for each e ∈ TE
4: [Stage 2] Edge-wise adaption as Êw ← Ew + cosine(nEtail, vtask) and re-rank Ne in TE
5: Map the description text of the relations Rs in Gs to the Symbolic Rule Set R
6: Construct procedural prompt PG by verbalizing the re-weighted Gs using R
7: Translate PG into the Admissible Knowledge Prompt P̂G = LMT(PG)
8: for each timestep i do (temporally-extended zero-shot inference)
9:   [Stage 3] Aggregate prompt Pi ← [T; S0:i−1; P̂G]
10:  [Stage 4 and Stage 5] Si = LMT(LMG(Pi))
11:  Update procedural plan ST ← Si
12: end for
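Concretely, the temporally-extended inference loop (lines 8-12) can be sketched in Python as below. This is a minimal sketch under our own simplifications: `generate_next_step` and `translate_to_admissible` are hypothetical callables standing in for LMG and LMT (a possible implementation of the latter is sketched after the Stage 5 paragraph), and the prompt template is illustrative.

```python
# A minimal sketch of Stages 3-5 of Algorithm 1; helper callables stand in
# for LM_G and LM_T, and the prompt format is an illustrative assumption.
def plan(task_name, knowledge_prompt, admissible_steps,
         generate_next_step, translate_to_admissible,
         max_steps=20, score_threshold=0.7):
    steps = []
    for i in range(max_steps):
        # Stage 3: aggregate task, history steps, and external knowledge.
        history = " ".join(f"Step {j + 1}: {s}." for j, s in enumerate(steps))
        prompt = f"{knowledge_prompt} Task: {task_name}. {history}"
        # Stage 4: let LM_G propose the next step in free-form text.
        candidate = generate_next_step(prompt)
        # Stage 5: map the candidate to the closest admissible step via LM_T.
        step, score = translate_to_admissible(candidate, admissible_steps)
        if score < score_threshold:  # termination condition from Stage 4
            break
        steps.append(step)
    return steps
```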
Stage 5: Admissible Step Translation To ensure that the generated procedural plans are grounded in the environment, we should avoid producing steps that are inadmissible (e.g., Toast the table). In other words, the generated steps should be fully constrained to the admissible composites of action and object in a certain task domain. Thus, previous works (Huang et al., 2022; Ahn et al., 2022) have explored using a model (LMT in our case) to score a step selected from a fixed set of available options, instead of directly sampling from the output distribution of the language model (LMG in our case). Specifically, we match the step generated by LMG to the most similar admissible step in the embedding space encoded by the Translation Language Model LMT. Following (Huang et al., 2022), we utilize a Sentence-Transformer (Reimers & Gurevych, 2019) to calculate the cosine similarity as π(si|x) = LMT(LMG(x)), which translates LMG(x) into the admissible step si ∈ S̄ that is closest in the embedding space measured by cosine similarity.
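A plausible implementation of this translation step with the sentence-transformers library is sketched below; the checkpoint name is an assumption (the paper specifies a RoBERTa-large Sentence-Transformer without naming the exact checkpoint). The returned similarity score can also serve as the termination signal in Stage 4.

```python
# A minimal sketch of admissible step translation; the checkpoint name is an
# assumption standing in for the RoBERTa-large sentence-transformer (LM_T).
from sentence_transformers import SentenceTransformer, util

lm_t = SentenceTransformer("all-roberta-large-v1")

def translate_to_admissible(generated_step, admissible_steps):
    """Map a free-form generated step to its closest admissible step."""
    query = lm_t.encode(generated_step, convert_to_tensor=True)
    candidates = lm_t.encode(admissible_steps, convert_to_tensor=True)
    scores = util.cos_sim(query, candidates)[0]  # cosine similarity per option
    best = int(scores.argmax())
    return admissible_steps[best], float(scores[best])
```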
3.3 COUNTERFACTUAL PROCEDURAL DATA CONSTRUCTION
To investigate the counterfactual reasoning ability, we design three families of intervention methods, sketched in the code after this paragraph: 1) Initial Configuration: intervene on the initial configuration, such as the location for implementing the task. 2) Intermediate Step: randomly select one step from the ground-truth program as an additional constraint on implementing the task and append it to the task name before generating the procedural plan. 3) Final Goal: intervene on the task goal as the composite of another randomly sampled task. Table 5 in the Appendix summarizes the categories and descriptions. The counterfactual dataset construction details and post-intervention examples are provided in Appendix B.2.
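The three intervention families can be sketched as simple program transformations; the string templates below are our assumptions, with the exact construction described in Appendix B.2.

```python
# A minimal sketch of the three intervention families; templates are
# illustrative assumptions based on Section 3.3 and Appendix B.2.
import random

def initial_configuration(task, steps, location):
    # Constrain where the task is carried out.
    return f"{task} in {location}", [f"Walk to {location}"] + steps

def intermediate_step(task, steps):
    # Append a randomly sampled ground-truth step as an extra constraint.
    constraint = random.choice(steps)
    return f"{task}. {constraint}", steps

def final_goal(task_a, steps_a, task_b, steps_b):
    # Compose two tasks into one long-horizon goal.
    return f"{task_a} and {task_b}", steps_a + steps_b
```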
4 EXPERIMENTS
4.1 PROCEDURAL PLANNING SETUP
Datasets We conduct zero-shot experiments, without training, on two datasets with procedural information: WikiHow (collected following Koupaee & Wang (2018)) and RobotHow (Puig et al., 2018). WikiHow is a large-scale text summarization dataset constructed from a human-written knowledge base, involving procedural tasks that span various topics. We utilize the “how to” titles as task names and the summarized headlines as steps. RobotHow is a large knowledge base of common household tasks collected in the VirtualHome (Puig et al., 2018) simulator; the dataset contains programs with high-level task names and low-level steps. MT is composed of 292 and 2000 distinct tasks from RobotHow and WikiHow, respectively. Human evaluations use 50 randomly sampled task examples for each dataset. Automatic evaluations use 150 and 1000 task examples randomly sampled from RobotHow and WikiHow, respectively. Please refer to Appendix B.1 and Appendix B.2 for dataset details.
Baselines We compare our approach with three vanilla generative pre-trained language models (BART, GPT2, and GPT3) and two powerful generation baselines: Zero-shot Planner (Huang et al., 2022), denoted “LLMaP”, and Chain of Thought (Wei et al., 2022), denoted “Chain”. More method and configuration details of the models can be found in Appendix B.3 and Appendix B.4.
Metrics We ask human annotators on the Amazon Mechanical Turk platform to rate model performance on two aspects: 1) Coverage: which sequence covers more of the steps that are necessary to complete the target task, ignoring their order (captures semantic completeness). 2) Order: which set of steps can better complete the target task when the sequential order of the steps is taken into account (captures sequential order correctness). In addition, we use Sentence-BLEU (S-BLEU) (Papineni et al., 2002), BERTScore (Zhang* et al., 2020), ROUGE-1 (Lin, 2004) and Word Mover's Distance (WMD) (Kusner et al., 2015) as automatic evaluation metrics. These metrics compute semantic scores between the annotated programs and the predictions. Details of the crowdsourcing human evaluation can be found in Appendix C.1.
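For reference, two of these metrics can be computed per example along the following lines; this is a hedged sketch assuming nltk for Sentence-BLEU and gensim with a pre-trained word2vec model for WMD, which may differ from the exact evaluation code.

```python
# A hedged sketch of per-example automatic metrics, assuming nltk and gensim
# (WMD additionally requires an optimal-transport backend such as POT).
from nltk.translate.bleu_score import sentence_bleu
import gensim.downloader

w2v = gensim.downloader.load("word2vec-google-news-300")

def s_bleu(reference_plan, predicted_plan):
    return sentence_bleu([reference_plan.split()], predicted_plan.split())

def wmd(reference_plan, predicted_plan):
    # Word Mover's Distance: lower means the plans are semantically closer.
    return w2v.wmdistance(reference_plan.split(), predicted_plan.split())
```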
4.2 HUMAN EVALUATION RESULTS WITH COVERAGE AND ORDER METRIC
Each example is rated by 3 crowdsourcing annotators. For the Win-Lose Comparison, we ask the human raters to choose between ours and the baseline LLMaP (Huang et al., 2022). The averaged results reported in Table 1 show that our PLAN is more frequently rated as better on both the coverage and order metrics, outperforming baselines in winning ratio by 21% in coverage and 26% in order across the two datasets. We report the average results of the Human Ratings on a 5-point Likert scale in Table 2. The consistent performance boost of PLAN indicates the superiority of injecting external commonsense knowledge into the procedural planning task. The performance drop of LLMaP and Chain in the counterfactual setting indicates the vulnerability of fixed holdout knowledge and pre-defined manual exemplars in causal procedural planning. Please refer to Appendix C.1 for the crowdsourcing human evaluation interface details. Table 3 shows two examples for Qualitative Comparison. More examples can be found in Appendix D.
4.3 AUTOMATICALLY MEASURING THE PROCEDURAL PLANNING
Main Results Table 4 summarizes the automatic evaluation results. PLAN achieves the best results regardless of the language model architecture, either autoregressive or autoencoder based. The performance gain of “LLMaP” over “Chain” is probably due to direct exposure to the holdout tasks from the dataset, while the “Chain” baseline still outperforms the vanilla baseline that only takes the high-level task name as the prompt. Note that the annotated program is not the
only solution, thus these automatic metrics provide limited absolute performance information. Details for the correlation between automatic metrics and human evaluation can be found in Section 4.5.
Effects of Edge-wise Adaption and Symbolic Program Execution The variant “w/o Adaption” maintains the top-k task-specific nodes ranked by the annotated weight EW in the external knowledge base G without adaption. The variant “w/o Symbolic” directly takes the extracted concept nodes from the external knowledge base as the prompt. The performance drops of these two variants in Table 4, with significance tests in Appendix C.2, demonstrate the importance of the adaption and symbolic modules.
Effects of the Large Language Model Architecture We use GPT2 and GPT3 as the autoregressive architectures and BART (Lewis et al., 2020) as the autoencoder architecture. The autoregressive architecture achieves better results than the autoencoder one. Since the pre-training objective of the autoregressive GPT is to predict the next token given the previous input tokens, we assume the performance gain of GPT is due to a smaller gap between the pre-training objective and procedural planning.
Level of Complexity We report results on a test set separated into several buckets according to the number of steps in the procedural planning task; the step number reflects the difficulty of the task. In Table 7 and Table 8 in Appendix C.2, we show that the averaged performance gains of PLAN over the baselines are consistent or even more significant in more complicated procedural planning settings. This indicates the superiority of PLAN in solving long-horizon tasks.
4.4 RESULTS ON COUNTERFACTUAL TASK SAMPLES
We apply the Initial Configuration, Intermediate Step, and Final Goal interventions on RobotHow and the Intermediate Step intervention on WikiHow. Human evaluations under the counterfactual setting are summarized in Table 1 and Table 2. PLAN consistently outperforms baselines by a large margin and experiences
a much smaller performance drop than the powerful baselines when switching to the counterfactual setting. We attribute this to the biased knowledge of the holdout examples and manual exemplars utilized in the baselines, which are vulnerable to counterfactual samples. Automatic evaluations on counterfactual RobotHow are summarized in Table 13 in Appendix C.2. Aligned with the human evaluations, PLAN achieves the best performance. The overall poor performance in the Final Goal category indicates the challenge of long-horizon and composite procedural planning, while the overall better performance in the Intermediate Step category benefits from the intermediate guidance.
4.5 CORRELATION BETWEEN AUTOMATIC AND HUMAN EVALUATION
We evaluate the segment-level Pearson correlation between human and automatic metrics. We observe that BERTScore has a moderate correlation to the human coverage score and WMD has a moderate correlation to the human order score, with 23.3% and 32.3% respectively. Similar to prior findings (Xu et al., 2021), n-gram-based metrics (Sentence-BLEU and ROUGE) have a relatively weaker correlation to the human coverage score, with Pearson correlations of 16.4% and 21.1%. Overall, our automatic and human evaluation scores are consistent with the main claim of this paper. However, human evaluation is still irreplaceable for procedural planning at the current stage.
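Computing the segment-level correlation amounts to pairing per-example automatic scores with per-example human ratings, e.g. with scipy (the variable names below are illustrative):

```python
# A minimal sketch of the segment-level correlation analysis.
from scipy.stats import pearsonr

def segment_level_correlation(automatic_scores, human_ratings):
    r, p_value = pearsonr(automatic_scores, human_ratings)
    return r, p_value

# e.g., segment_level_correlation(bertscore_per_example, coverage_ratings)
```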
5 RELATED WORK
Procedural Planning Learning to generate procedural plans (Zhang et al., 2020a; Lyu et al., 2021; Zhang et al., 2020b; Chang et al., 2020; Wu et al., 2022; Huang et al., 2022) is important for embodied agents (Tellex et al., 2011; Jansen, 2020; Ahn et al., 2022) and conversational assistants (Ilievski et al., 2018; Yang et al., 2022). Previous work views procedural script learning as a structured form of commonsense knowledge (Gupta et al., 2004; Regneri et al., 2010; Wanzare et al., 2016), while more recent work strengthens its association with changing environments for executable action planning (Puig et al., 2018; Shridhar et al., 2020). Some works (Sun et al., 2020; Zhao et al., 2021) explore utilizing human-written programs to precisely specify tasks. Our method tackles the problem with awareness of cause-effect relations by utilizing commonsense-infused prompts via a neuro-symbolic approach (Mao et al., 2019; Nye et al., 2021; Yi et al., 2018) for zero-shot procedural planning.
Causality for Language Generation The integration of causality and machine learning has been an intriguing topic for many problems (Pearl, 2009; Schölkopf, 2022). Previous studies focus on causal inference for natural language understanding (Chen et al., 2020; Keith et al., 2020; Wood-Doughty et al., 2018) and on generating counterfactual text representations (Feder et al., 2021). Weber et al. (2020) propose an intervention method for script learning. However, these methods cannot be directly applied to procedural planning, which requires a formal structure. Our method is based on mediation analysis (VanderWeele, 2015) and causal intervention (Pearl, 2009; Peters et al., 2017).
Prompt for Large Language Model There is an emerging interest in using prompts to extract knowledge from large language models (Chen et al., 2022; Le Scao & Rush, 2021; Su et al., 2022; Ye et al., 2022; Zhou et al., 2022; Kojima et al., 2022). Cao et al. (2022) treat the prompt as a cause of the task-specific predictor and investigate biases in prompt-based probing evaluations. Chain of Thought (Wei et al., 2022) discovers that LLMs can perform better on reasoning tasks when the prompt is designed as a series of short sentences that mimic the human reasoning process.
6 CONCLUSION AND FUTURE WORK
Procedural planning is a newly emerging research area of great importance to various applications, such as household robots and virtual assistants. We propose a neuro-symbolic procedural PLANner (PLAN) with commonsense-infused prompts elicited from an external knowledge base to solve the procedural planning problem in a zero-shot manner without human-annotated exemplars. Experiments show the effectiveness of our proposed PLAN under both the original and counterfactual settings, indicating its capability of mitigating spurious correlations by injecting external knowledge into LLMs. However, procedural planning over long-horizon and composite tasks remains challenging, and exploring multimodal learning and developing human-aligned evaluation metrics are promising future directions in this area.
7 ETHICAL STATEMENT
Given the limited cultural diversity of the datasets we use, RobotHow and WikiHow, we assume our results may be biased toward a single cultural background. For instance, given the task “make breakfast”, a planner should take multiple cultures into consideration when generating the procedural plans.
8 REPRODUCIBILITY STATEMENT
We provide more data samples and qualitative samples in the supplemental materials. In addition, we provide our code implementation at https://anonymous.4open.science/r/PLANNER-7B24 to reproduce our experiments. The Preprocess folder provides the utilities used to construct the data. The Evaluation folder provides the code for the automatic and human evaluation tools. The Planning folder contains the main code for our approach and the reproduced planners for procedural planning. The Visualization folder provides the code we use for visualization in the environment.
ACKNOWLEDGMENTS
The research was sponsored by the U.S. Army Research Office and was accomplished under Contract Number W911NF-19-D-0001 for the Institute for Collaborative Biotechnologies. This work was also supported by the National Science Foundation award #2048122. We thank the Robert N. Noyce Trust for their generous gift to the University of California via the Noyce initiative. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Appendix
Table of Contents
A SCM Theoretical Details
  A.1 Causal Preliminaries
  A.2 The Backdoor Adjustment
  A.3 The Front-door Adjustment

B Implementation Details
  B.1 Original Dataset Details
  B.2 Counterfactual Dataset and Experiment Details
  B.3 Method Details
  B.4 Hyperparameter Search and Configuration Decision
  B.5 Computation and Resources

C Evaluation Details
  C.1 Crowdsourcing Human Evaluation
  C.2 More Results

D Qualitative Examples
  D.1 Intermediate Output
  D.2 Predicted Procedural Plans

E Discussion
  E.1 Limitations
  E.2 Failure Analysis
  E.3 Ethical Considerations
A SCM THEORETICAL DETAILS
A.1 CAUSAL PRELIMINARIES
The Structural Causal Model (SCM) is a directed acyclic graph (DAG) that describes the causal relationships within a system (Pearl, 2009). In this paper, we refer to the SCM unrolled along the time dimension as the full temporal causal graph, while the rolled-up version is also called the causal summary graph (Peters et al., 2017). In an SCM, if a variable D is a cause of both T and Si, it is called a confounder. A confounder opens up a backdoor path and causes a spurious correlation between T and Si. A backdoor path is defined as a remaining path between T and Si when all the arrows pointing out of T are removed. Therefore, T ← D → Si is a backdoor path. For our SCM with mediator Pi shown in Figure 4c (same as Figure 2b in the main paper), there is no backdoor path between T and {Pi, Si−1}, because only D → T is left after removing the outgoing arrows of T. On the other hand, there is a backdoor path between Pi and Si, i.e., Pi ← T ← D → Si, so that Pi indirectly affects the observation of Si through {T, Si−1} and D. The mediator is a variable added between the treatment variable (the cause, T and Si−1 in our case) and the outcome variable (the effect, Si in our case), and it blocks all directed paths from the cause to the effect (Zhang et al., 2016). A spurious correlation happens when two variables are statistically related but not causally related, either because a third variable influences both at the same time or because the correlation is coincidental.
To identify the true causal effect between X and Y, we aim to estimate the conditional π(Y|do(X)) after intervening with the do-operator. The do-operator breaks the backdoor path by setting X to a fixed value independent of Z; the path Z → X can then be removed, eliminating the backdoor paths. In practice, the backdoor adjustment and the front-door adjustment are the two fundamental methods to implement interventions and obtain the conditional π(Y|do(X)).

Clarity of the Definition As a language prompt, Pi inherits its content from Pi−1 and thus can be detached from the steps before Si−1 for simplicity.
Causal Intervention There are two types of operation to control the confounding bias: the backdoor adjustment and the front-door adjustment (Pearl, 2009). The backdoor adjustment is intractable in our case because it requires the prior distribution of the confounding variables. On the other hand, we can construct an input prompt as a mediator Pi for T → Si and Si−1 → Si. Then the front-door adjustment applies a two-step do-operation to mitigate bias by investigating P → Si (Pearl, 2009). Specifically, we construct the prompt mediator Pi using techniques illustrated in Section 2.2.
The pre-trained knowledge (D) in LLMs confounds language models into making biased decisions toward unreasonable actions. Since the confounder is unobservable, intervention techniques such as the backdoor adjustment (definition in Appendix A.2) (Hu & Li, 2021; Weber et al., 2020; Yue et al., 2020) are not applicable in our SCM. Instead, we build a mediator and implement it as a commonsense-infused prompt. Through the mediator, we can identify the causal effects among goals and steps by investigating the indirect effect from the goals, which is essentially the front-door adjustment (definition in Appendix A.3) in causality (Pearl, 2009).
A.2 THE BACKDOOR ADJUSTMENT
The backdoor adjustment is one way to realize the intervention do(T = t) by considering the conditional probability over the existing data distribution with observed confounder D. Let πi denote π(·|Pi−1) that represent the probability density function conditioned on Pi−1. It calculates the average causal effects by considering all stratums of the dataset:
\[
\pi_i(S_i \mid \mathrm{do}(T)) = \sum_{d} \pi_i(S_i \mid T, D = d)\, \pi_i(D = d)
\tag{5}
\]
However, for LLMs, the pretraining data is usually unobservable and has been transformed as knowledge incorporated into the hidden space. Therefore, we are not able to directly apply the backdoor adjustment.
A.3 THE FRONT-DOOR ADJUSTMENT
The front-door adjustment is another technique to apply intervention by introducing a mediator Pi when the confounder is unobservable. As is explained in Section 2.2 from the main paper, the front-door adjustment is equivalent to two consecutive do-operations on task T and prompt Pi. We first investigate the generation of S1 and then expand it to St.
Timestep i = 1 As shown in Figure 4a, since there are no preceding steps, the first step generation involves D, T and P1 only. Similar to the proof in Section 2.2 of the main paper, we have:
\[
\begin{aligned}
\pi_i(S_1 \mid \mathrm{do}(T))
&= \sum_{p} \pi_i(S_1 \mid \mathrm{do}(P_1 = p))\, \pi_i(p \mid \mathrm{do}(T)) \\
&= \sum_{p} \pi_i(p \mid T) \sum_{t} \pi_i(S_1 \mid p, T = t)\, \pi_i(T = t)
\end{aligned}
\tag{6}
\]
By adding intervention to T , we make the value of do(T = t) independent of the confounder D at the beginning. The backdoor path through D → T is eliminated as a result.
Timestep i > 1 As shown in Figure 2a of the main paper, we model the mediator Pi as an effect of three variables, T, Pi−1 and Si−1. The first step of our front-door adjustment is to apply the do-operator on the three variables and observe the change in Pi, as explained in Section 2.2 of the main paper. Since there are no backdoor paths between Pi and these variables, the probability after intervention equals the conditional probability without intervention:
\[
\pi_i(P_i = p \mid \mathrm{do}(T)) = \pi_i(P_i = p \mid T) \tag{7}
\]
\[
\pi_i(P_i = p \mid \mathrm{do}(P_{i-1})) = \pi_i(P_i = p \mid P_{i-1}) \tag{8}
\]
\[
\pi_i(P_i = p \mid \mathrm{do}(S_{i-1})) = \pi_i(P_i = p \mid S_{i-1}) \tag{9}
\]
The second step is to apply the do-operator on Pi and then identify the causal effect as:
\[
\pi_i(S_i \mid \mathrm{do}(P_i)) = \sum_{t, p', s} \pi_i(S_i \mid P_i, T = t, P_{i-1} = p', S_{i-1} = s)\, \pi_i(T = t, P_{i-1} = p', S_{i-1} = s)
\tag{10}
\]
Combining Equations 7-9 and Equation 10, we have the front-door adjustment. Note that there are three backdoor paths, one from each of the variables T, Pi−1, and Si−1, as is shown in Figure 4b (drawn
in blue, red and purple). More importantly, the one through T , i.e. Pi ← T ← D → Si (the blue path in Figure 4b) and the one through Pi−1, i.e. Pi ← Pi−1 ← T ← D → Si (the red path in Figure 4b) shares the same subpath. The intervention on the task T breaks the backdoor paths for both T and Pi−1. Therefore, we have our front-door adjustment as
\[
\begin{aligned}
& \pi_i(S_i \mid \mathrm{do}(S_{i-1}), \mathrm{do}(P_{i-1}), \mathrm{do}(T)) && (11) \\
&= \sum_{p} \pi_i(S_i \mid \mathrm{do}(P_i = p))\, \pi_i(p \mid \mathrm{do}(S_{i-1}), \mathrm{do}(P_{i-1}), \mathrm{do}(T)) && (12) \\
&= \sum_{p} \pi_i(S_i \mid \mathrm{do}(P_i = p))\, \pi_i(p \mid \mathrm{do}(S_{i-1}), P_{i-1}, \mathrm{do}(T)) && (13) \\
&= \sum_{p} \pi_i(S_i \mid \mathrm{do}(P_i = p))\, \pi_i(p \mid \mathrm{do}(S_{i-1}), \mathrm{do}(T)) && (14) \\
&= \sum_{p} \pi_i(p \mid S_{i-1}, T) \sum_{s,t} \pi_i(S_i \mid p, S_{i-1} = s, T = t)\, \pi_i(S_{i-1} = s, T = t) && (15) \\
&= \pi_i(S_i \mid \mathrm{do}(S_{i-1}), \mathrm{do}(T)) && (16)
\end{aligned}
\]
We have Equation 13 because of the intervention on T and Rule 2 (Pearl, 1995), and Equation 14 because of Rule 1 (Pearl, 1995). After the simplification in Equations 12-16, we get the SCM at timestep i > 1 in Figure 4c. This is an equivalent SCM after eliminating Pi−1 in Figure 4b. The reason we can eliminate Pi−1 is as follows. We follow a common method of constructing a temporally-extended prompt, which is to append the predictions at previous timesteps to the prompt at the current timestep. In our case, PG,i is the same as PG,i−1, so Pi inherits part of its content from Pi−1 and the change depends only on Si−1. Thus Pi−1 and Si−2 are fixed, and there is no need to predict Pi−1 again at timestep i. In this way, we simplify the causal graph in Figure 4b to the one in Figure 4c. In summary, we define and simplify the causal graph based on the temporally-extended property of our prompt construction (Pi inherits content from Pi−1). We end up with Equations 14-16, shown as Equation 3 in Section 2.2 of the main paper.
B IMPLEMENTATION DETAILS
B.1 ORIGINAL DATASET DETAILS
RobotHow This dataset is released under an Attribution-NonCommercial-ShareAlike 4.0 International Creative Commons License. We evaluate inference on 150 tasks randomly selected from the dataset. Each program contains the task name, a task description, and steps; we use the task name and the sequence of steps as our input and output references. Each step is a composition of [Action], [Object] and [Number]. For example, the sequence of steps for the task "Watch TV" is: 1. [Walk] <TELEVISION> (1) 2. [SwitchOn] <TELEVISION> (1) 3. [Walk] <SOFA> (1) 4. [Sit] <SOFA> (1) 5. [Watch] <TELEVISION> (1).
WikiHow This dataset2 is under an Attribution-NonCommercial-ShareAlike 3.0 Creative Commons License, and the text content is free to modify, republish and share. We evaluate inference on 1000 tasks randomly selected from the dataset. The admissible action space and interaction object space are more complex than the programs in RobotHow, and there is no fixed "[Action] <Object> (Number)" form for each step. Each article contains a title, bold headlines, and text; we utilize the title and headlines as our task name and steps, respectively.
External Knowledge Base For the external knowledge base, we utilize ConceptNet to leverage commonsense reasoning ability to help ground language generation in goal-guided procedural text generation. ConceptNet (Speer et al., 2017) captures commonsense knowledge explicitly with triplets of (head node, relation, end node). It contains 799,273 nodes and 2,487,810 edges that represent both symmetric and asymmetric relations. Specifically, the core relations we utilize are Synonym, AtLocation, CapableOf, Causes, CausesDesire, HasPrerequisite, HasSubevent, and UsedFor. Since we are looking at commonsense knowledge in household tasks, we filter out the linguistic relations (/r/DistinctFrom, /r/DerivedFrom, /r/SymbolOf, /r/EtymologicallyRelatedTo, /r/EtymologicallyDerivedFrom).
2https://www.wikihow.com
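A minimal sketch of this relation filtering, assuming triplets are stored as (head, relation, tail, weight) tuples:

```python
# Keep household-domain core relations and drop the rest, following the
# relation lists above; the tuple layout is an assumption.
CORE_RELATIONS = {"Synonym", "AtLocation", "CapableOf", "Causes",
                  "CausesDesire", "HasPrerequisite", "HasSubevent", "UsedFor"}

def filter_triplets(triplets):
    return [t for t in triplets if t[1] in CORE_RELATIONS]
```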
B.2 COUNTERFACTUAL DATASET AND EXPERIMENT DETAILS
Table 6 shows examples that compare the original program with the counterfactual program for each intervention method. Specifically, for Initial Configuration, we randomly append a location to a given task name to constrain the location of completing the task, and the steps are prepended with the initial step "walk to <Location>". For Intermediate Step, we randomly sample a step from the task-specific program and append it to the task name to constrain the way a given task is implemented. For Final Goal, we randomly combine two tasks by combining both the task names and the programs to construct a set of long-horizon composite tasks.
We conduct counterfactual experiments by applying randomly selected intervention methods over RobotHow. And we only apply the Intermediate Step intervention method over WikiHow due to the loose configuration requirement and the long text of the WikiHow contents. Note that the performance gain of PLAN under the counterfactual setting mainly comes from the additional guidance of the task introduced from the Intermediate Step intervention method. However, the baselines mostly experience performance drops due to the limited annotated exemplars. PLAN consistently outperforms baselines by a large margin, indicating its superiority under the counterfactual setting.
B.3 METHOD DETAILS
The existing formalization of the procedural planning task can be mainly categorized as 1) sequential choice making (Lyu et al., 2021; Wu et al., 2022; Zhang et al., 2020a;b), which reasons about the next step from the options given, the task, and previous steps; 2) conditioned generation (Huang et al., 2022; Ahn et al., 2022), which generates the temporally extended plans to implement the task. We study the procedural planning task as the conditioned generation problem (Huang et al., 2022; Ahn et al., 2022) since it resembles real-world scenarios.
Baselines LLMaP proposes a procedure to extract temporally extended plans from large pre-trained language models. Chain explores manually creating exemplars that mimic the reasoning process
and uses them to prompt large language models for reasoning tasks. To compare with Chain on the procedural planning task, we manually generate exemplars that contain the chain of thought for 1% of the inference task programs. Note that for the BART language model, we use the BART-large version, and we use the 1.5-billion-parameter GPT-2 (gpt2-xl). For the translation model LMT, we use sentence-transformers (RoBERTa-large). All these models are released by HuggingFace. In addition, our experiments with GPT3 (davinci) use the OpenAI API (May 2022).
External Knowledge Graph ConceptNet5 defines a set of 34 relations.3 Within the relations we consider for the procedural planning task, the average subgraph sampling time is 0.03576 milliseconds per task program.
B.4 HYPERPARAMETER SEARCH AND CONFIGURATION DECISION
We perform a hyperparameter search for all evaluated methods over the following hyperparameters.
• The confidence threshold θ, below which generation terminates, is searched in {0, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8}.
• The steps horizon, which constrains the maximal number of procedural planning steps, is searched in {10, 20, 40}.
• The number of hops for retrieving the subgraph from the external knowledge base is searched in {1, 2, 3}.
• The ratio of maximal concepts to the length of the task name is searched in {1, 2, 3}.
• The cosine similarity threshold for keeping a task-specific concept is searched in {0.4, 0.6, 0.8}.
• The edge weight threshold θe is searched in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}. • The top-k task-specific nodes value is searched in {1, 5, 10, 15, 20, 25, 50, 100}.
The configurations used in the experiments are: θ = 0.7, a 20-step horizon, 3 hops, a concept-to-task-name-length ratio of 3, a cosine similarity threshold of 0.4, θe = 0.6, and k = 10.
We empirically choose the hop number H as 3, considering both the input length limit of the LLMs and the fact that 3 hops contain a reasonable amount of relevant information in practice (Zhang et al., 2022).
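For reference, the chosen configuration can be collected into a single dictionary; a minimal sketch (the key names are ours, not from the released code):

```python
# Final hyperparameter configuration from the search above
# (key names are illustrative, values are the paper's chosen settings).
CONFIG = {
    "confidence_threshold": 0.7,   # θ: terminate generation below this score
    "step_horizon": 20,            # maximal number of plan steps
    "num_hops": 3,                 # H: subgraph retrieval depth
    "concept_ratio": 3,            # max concepts / task-name length
    "concept_sim_threshold": 0.4,  # keep task-specific concepts above this
    "edge_weight_threshold": 0.6,  # θe: keep triplets above this weight
    "top_k_nodes": 10,             # k: task-specific nodes retained
}
```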
B.5 COMPUTATION AND RESOURCES
We use a single NVIDIA A100 GPU server for all experiments. Since there is no training in our zero-shot setting, computation is only used for the inference stage of the experiments.
C EVALUATION DETAILS
C.1 CROWDSOURCING HUMAN EVALUATION
We conduct all human evaluations (rating and win-lose comparison) on the Amazon Mechanical Turk platform. Each example is rated by 3 annotators. For every assignment, we ask the Amazon Mechanical Turk workers to evaluate the quality of the provided low-level steps given the high-level task description. For the Win-Lose Comparison, they are asked to choose one of the two provided model-generated results (1: the first one is better, 2: equal, 3: the second one is better). For the Human Ratings, they are asked to score each sample on a 5-point Likert scale. This process does not involve collecting any personal information, and we manually check that no offensive content is produced by the models.
The assignment layout templates for workers are shown in Figure 7 and Figure 6. Specifically, we evaluate 50 randomly selected task examples from each dataset (RobotHow and WikiHow) under all settings (standard and counterfactual). We only keep responses from workers who read the instructions carefully, which we verify by checking whether they give a score of 1 to the empty program as a sanity check. The estimated hourly wage paid to participants is $9, and the total amount spent on participant
3https://github.com/commonsense/conceptnet5/wiki/Relations
compensation is $1296. The details of the Human Intelligence Tasks process are described in the following sections.
C.1.1 WIN-LOSE COMPARISON
During the Human Intelligence Tasks, the workers are shown the following instructions: Read the given task and the sequence of steps, and determine which set of steps can better complete the target task. In other words, can the task be decomposed into these steps? Please consider the sequential order of the steps.
Then the program to be evaluated is provided as:
Question Task: Study
Sequence 1: Step 1: Walk to textbook Step 2: Read book Step 3: Walk to book
Sequence 2: Step 1: Walk to home office Step 2: Find desk
Finally, the workers are asked to score the program by following the instructions below: Select an option: 1 - Sequence 1 is better; 2 - Tie; 3 - Sequence 2 is better
The above example evaluates the order metric; for the coverage metric, the same process is conducted, except the instructions are: Read the given task and the sequence of steps, and determine which sequence covers more of the steps that are necessary to complete the target task. Please ignore the sequential order of the steps.
C.1.2 HUMAN RATINGS
Similar to the Win-Lose Comparison Human Intelligence Tasks, the workers are shown the following instructions: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (please consider the sequential order of the steps). You may directly give the lowest score (1) for empty steps. In other words, can the task be decomposed into these steps? (Please consider the sequential order of the steps.)
Then the program to be evaluated is provided as:
Question Task: Write an email
Sequence of Steps: Step 1: Walk to home office Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Walk to computer Step 7: Find chair Step 8: Sit on chair Step 9: Find keyboard Step 10: Grab keyboard Step 11: Find mouse Step 12: Grab mouse Step 13: Type on keyboard
Finally, the workers are asked to score the program by following the instructions below: Use the slider below to indicate how much you agree with the following statement (1 = Strongly disagree, 5 = Strongly agree). If the "sequence of steps" is blank, please directly choose 1 (lowest score). The task can be completed in any reasonable scenario using the provided steps. [SLIDER PROVIDED HERE]
The above example evaluates the order metric; for the coverage metric, the same process is conducted, except the instructions are: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (please ignore the sequential order of the steps). You may directly give the lowest score (1) for empty steps. In other words, can the task be decomposed into these steps? (Please ignore the sequential order of the steps.)
C.2 MORE RESULTS
Significance Test We provide paired t-test (p < 0.05) statistics for Table 2. On RobotHow, our PLAN significantly outperforms all baselines on Original-Order (BART) and Counterfactual-Coverage (GPT2). On WikiHow, our PLAN significantly outperforms all baselines on Original-Coverage (BART, GPT2), Counterfactual-Coverage (BART, GPT2), and Counterfactual-Order (BART). For the coverage metric under the counterfactual setting, the human-provided program is not significantly better than our PLAN.
We also conduct paired t-tests (p < 0.05) over the variants "w/o Adaption" and "w/o Symbolic". Compared with the full model PLAN, the variants experience a statistically significant performance drop; on BERTScore-f1 in particular, the p-values are 8.884e-13 and 1.4e-8, respectively. This further confirms the importance of these modules.
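For reference, such a paired significance test can be run with SciPy; a minimal sketch with placeholder scores (not the reported numbers):

```python
from scipy import stats

# Per-example BERTScore-F1 for the full model and an ablation,
# paired over the same evaluation tasks (placeholder values).
full_model = [0.91, 0.88, 0.93, 0.90, 0.87]
wo_adaption = [0.85, 0.84, 0.90, 0.86, 0.83]

t_stat, p_value = stats.ttest_rel(full_model, wo_adaption)
print(f"t = {t_stat:.3f}, p = {p_value:.3g}")  # significant if p < 0.05
```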
Results on GPT-3 In addition, we conduct experiments with GPT-3 (davinci version) using the OpenAI API. We show the comparison in Table 9 and Table 10.
Motivation of Evaluation Metrics The procedural planning task is open-domain in nature: golden plans are not necessarily unique. This makes the common automatic metrics proposed for natural language tasks imperfect for evaluating procedural planning. The same difficulty of directly judging systems with automatic metrics is discussed in LLMaP (Huang et al., 2022) as well. We assume that human evaluation on Coverage and Order reflects how close the procedural plans are to the human-annotated programs, because the human annotators are explicitly required to determine whether the task can be completed in any reasonable scenario using the procedural plans. Thus we provide both automatic and human evaluation on the two aspects, Coverage and Order, as described in the Metrics paragraph of Section 4.1.
Evaluation on Success Rate Metric To make human evaluations more intuitive, we provide an additional Success Rate metric that measures whether the procedural plans successfully implement the task, focusing on success rather than on the coverage or order of the plans. We show Success Rate evaluations of the baselines and our method in Table 11. The assignment layout template for workers is shown in Figure 8.
More Ablation To verify the contribution of the translation language model LMT that translates the knowledge prompt PG into an admissible one P̂G, we conduct an additional ablation experiment by removing this LMT and prompting the LLM with PG in place of P̂G for procedural planning. We provide results with comparisons to the other ablations in Table 12.
Results on Counterfactual Task Samples We show automatic evaluation results on counterfactual RobotHow in Table 13.
D QUALITATIVE EXAMPLES
D.1 INTERMEDIATE OUTPUT
We provide running examples with intermediate outputs for each module in the following paragraphs. First, we show the input task T, the subgraph Gs (depicted as tuples of start node, relation type, tail node, and edge weight), the knowledge prompt PG, and the translated prompt P̂G:
• Input task T : Take shower.
• Human-annotated Plan Reference: Step 1: Walk to bathroom. Step 2: Walk to clothes dress. Step 3: Find clothes dress. Step 4: Put off clothes dress. Step 5: Find shower. Step 6: Enter shower. Step 7: Find soap. Step 8: Grab soap. Step 9: Scrub soap. Step 10: Put back soap. Step 11: Leave shower. Step 12: Find towel. Step 13: Grab towel. Step 14: Wipe towel. Step 15: Find clothes dress. Step 16: Put on clothes dress.
• Task-relevant subgraph Gs(Nhead, Re, Ntail, Ew): (take a shower, HasLastSubevent, dry off, 6.0); (bathe, HasLastSubevent, dry off, 6.0); (take a shower, HasPrerequisite, take out your clothes, 4.47); (take a shower, HasSubevent, get clean, 4.47); (take a shower, HasPrerequisite, take your clothes off, 3.46); (go to a party, HasPrerequisite, take a shower, 2.82); (play lacrosse, HasLastSubevent, take a shower, 2.82); (get clean, HasPrerequisite, take a shower, 2.82); (take a shower, MotivatedByGoal, wash your hair, 2.82); (play sports, HasLastSubevent, take a shower, 2.82); (go to the hairdresser, HasPrerequisite, take a shower, 2.82); (take a shower, HasPrerequisite, turn on the water, 2.0); (have a bath, HasLastSubevent, dry off, 2.0); (get wet, HasSubevent, dry off, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); (take a shower, HasSubevent, wash your hair, 2.0); (take a shower, HasLastSubevent, turn off the water, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); (take a shower, HasLastSubevent, put your clothes on, 1.0); (take a shower, HasSubevent, use shampoo, 1.0); (take a shower, HasSubevent, wash behind your ears, 1.0); (take a shower, HasSubevent, wash your body, 1.0); (take a shower, HasPrerequisite, go to the bathroom, 1.0); (take a shower, HasPrerequisite, go to the bathroom and undress, 1.0); (take a shower, HasPrerequisite, step into the shower, 1.0); (take a shower, HasPrerequisite, soap up, 1.0); (take a shower, HasPrerequisite, bring some soap, 1.0); (take a shower, HasPrerequisite, bring some shampoo, 1.0); (take a shower, HasLastSubevent, towel yourself off, 1.0); (take a shower, HasPrerequisite, remember to take your towel, 1.0).
• Knowledge prompt PG: Step: take out your clothes. Step: set clean clothes. Step: go to the bathroom. Step: go to the bathroom and undress. Step: take your clothes off. Step: turn on the water. Step: bring some soap. Step: bring some shampoo. Step: remember to take your towel. Step: get clean. Step: wash your hair. Step: use shampoo. Step: wash behind your ears. Step: wash your body. Step: turn off the water. Step: dry off. Step: become more clean. Step: put your clothes on.
• Translated knowledge prompt P̂G: Step: find clothes underwear. Step: wash shower. Step: walk to bathroom. Step: look at shower. Step: take off clothes underwear. Step: wash shower. Step: find soap. Step: grab soap. Step: scrub shower. Step: find shampoo. Step: grab shampoo. Step: find towel. Step: wipe shower. Step: wash hair. Step: wipe shampoo. Step: scrub shower. Step: wash body. Step: switch off washing machine. Step: scrub shower. Step: wipe shower. Step: put on clothes underwear.
• Generated plan ST : Step 1: Find clothes underwear. Step 2: Walk to Bathroom. Step 3: Take off clothes shirt. Step 4: Scrub shower. Step 5: Pour shampoo into hair. Step 6: Wipe shampoo. Step 7: Wipe hair. Step 8: Wash body. Step 9: Find Towel. Step 10: Put on clothes underwear.
D.2 PREDICTED PROCEDURAL PLANS
More randomly selected qualitative examples of the final predicted procedural plans are provided. Table 14 shows random samples on the original datasets. Table 15 shows random samples on the counterfactual datasets with the Intermediate Step intervention method. And Table 16 shows random samples on counterfactual RobotHow with the Initial Configuration and Final Goal intervention methods.
Model Program (RobotHow, Task: Play Games)
Human: Step 1: Walk to bedroom. Step 2: Walk to video game console. Step 3: Find video game console. Step 4: Walk to video game console. Step 5: Switch on video game console. Step 6: Find television. Step 7: Switch on television. Step 8: Find video game controller. Step 9: Grab video game controller. Step 10: Find chair. Step 11: Walk to chair. Step 12: Sit on chair.
Chain: Step 1: Put chef knife on water glass. Step 2: Find sink.
LLMaP: Step 1: Walk to bedroom. Step 2: Walk to video game console. Step 3: Find video game console.
Model Program (RobotHow Initial Configuration, Task: Play games in dining room)
Human: Step 1: Walk to dining room. Step 2: Walk to video game console. Step 3: Find video game console. Step 4: Walk to video game console. Step 5: Switch on video game console. Step 6: Find television. Step 7: Switch on television. Step 8: Find video game controller. Step 9: Grab video game controller. Step 10: Find chair. Step 11: Walk to chair. Step 12: Sit on chair. Task: Play games in dining room. Step 1: Walk to video game controller. Step 2: Put video game controller on diningtable. Step 3: Put boardgame on kitchen table. Step 4: Put boardgame on diningtable.
Chain: Step 1: Walk to dining room. Step 2: Walk to sauce pan. Step 3: Pour bottle water into dish bowl. Step 4: Walk to water. Step 5: Walk to carrot. Step 6: Walk to food salt.
E DISCUSSION
E.1 LIMITATIONS
Although this work points out a direction for prompting actionable knowledge from large-scale pre-trained language models with external commonsense knowledge, limitations in reasoning over long-horizon procedural plans remain. Existing datasets for procedural planning, such as WikiHow and RobotHow, are monolingual, supporting only English goals and plans. In the future, it is important to expand these datasets, or to build novel datasets, that support the many languages used across the world. The inherent differences between these languages may also result in different planning strategies in granularity or abstraction level, which is potentially challenging. In addition, long-horizon and complex composite tasks remain challenging for existing procedural planners.
The above limitations are discussed mainly with respect to the challenges of the procedural planning task itself. In addition, there are limitations of our implementation, which is guided by our causal analysis. First, the coverage of the leveraged external resources is limited, which is common in knowledge-enhanced systems. This may result in a wrong understanding of the task and produce unreasonable procedural plans. For example, knowledge of the word "Turking", which refers to "the act or process of performing small tasks using the Amazon Mechanical Turk service" according to Wiktionary, is not covered in the external resources (e.g., ConceptNet). Since our proposed system does not assume specific external resources, it is plausible to utilize more powerful external resources (e.g., Wiktionary) in the future. Second, the hop number and the threshold of the multi-hop retrieval in task-relevant subgraph sampling are currently configured hyperparameters, which may result in suboptimally constructed prompts. Future work could make these hyperparameters learnable for each task domain, and could also explore the pros and cons of end-to-end commonsense-infused prompts versus neuro-symbolically constructed prompts.
E.2 FAILURE ANALYSIS
We discuss detailed failure modes and examples with analyses below. Consider the predicted procedural plans for the task "Turking", which refers to "the act or process of performing small tasks using the Amazon Mechanical Turk service" according to Wiktionary. We compare the predictions of the baselines and our method: (1) the ground-truth plan is "Task: Turking. Step 1: Walk to home office. Step 2: Walk to desk. Step 3: Find chair. Step 4: Sit on chair. Step 5: Find computer. Step 6: Switch on computer"; (2) the plan predicted by the Chain baseline is empty; (3) the plan predicted by the LLMaP baseline is "Task: Turking. Step 1: Put teddybear on oven."; (4) our prediction is "Task: Turking. Step 1: Eat food turkey. Step 2: Drink water. Step 3: Sleep." We can see that for such an "out-of-knowledge" task, our method also leads to failed planning. We assume this is mainly due to the limited knowledge in the external resources, as discussed in Appendix E.1, and this main failure mode could be avoided by introducing larger external resources (e.g., Wiktionary), as in other knowledge-enriched methods.
E.3 ETHICAL CONSIDERATIONS
We hope to de-bias procedural planning to avoid misleading either humans or robots with daily-life instructions, which may result in unsafe situations. The cultural bias behind these datasets can be a critical issue for future work. As the ground-truth planning steps usually reflect the culture shared by the English-speaking population, other cultures may have completely different practical considerations that lead to different orderings of these steps, or even to novel steps that are not proposed by the LLMs we utilize in this paper. In the future, we will consider cultural bias as a proxy variable so that we can adjust the implicit knowledge from LLMs, or the commonsense from external sources, according to the needs of different cultural backgrounds. | 1. What is the focus and contribution of the paper on procedural planning using large language models?
2. What are the strengths of the proposed approach, particularly in its ability to attend to causal entities?
3. What are the weaknesses of the paper regarding its discussion on failure cases and intermediate outputs?
4. Do you have any questions regarding the symbolic rule set used in the study or the ablation experiment suggested by the reviewer?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Summary:
Existing large language models (LLMs) require manual exemplars to acquire procedural planning knowledge in the zero-shot setting.
The paper proposed a neuro-symbolic procedural PLANner (PLAN) with commonsense-infused prompts elicited from an external knowledge base (i.e., ConceptNet) to solve the pure-language-based procedural planning problem in a zero-shot manner.
Human and automatic evaluations on WikiHow and RobotHow show the superiority of the proposed PLAN over the prior methods on procedural planning.
Strengths And Weaknesses
Strength:
Solid and reasonable idea: Based on the observation that due to potentially biased pre-training data, pre-trained knowledge in LLMs may confound the model to make wrong decisions when asking the model to generate a procedural plan of a task, the authors proposed to apply the frontdoor adjustment in causality to build a mediator and implemented it as a commonsense-infused prompt. The prompt obtained from their neuro-symbolic-based method allows the LLM to attend to the causal entities instead of the highly correlated ones for the next step generation.
Strong performance: the proposed method PLAN outperformed two recent SOTA methods LLMaP (“Language models as zero-shot planners”) and Chain (“Chain of thought prompting”) statistically significantly.
Quite thorough experimental study and analysis: for the experiments, the authors utilized several evaluation methods including human evaluations, two datasets, and several pre-trained LLMs.
The paper is well written and organized.
Weaknesses:
Discussion on the failure cases is currently missing. In addition, the generated procedural plan of the proposed method was shown, but it would be interesting and useful for readers to see the exact intermediate outputs of the proposed framework given an actual task from the evaluation dataset, e.g., Gs, PG, P̂G, etc. In this way, readers may have a better understanding of the current capabilities of each module of the proposed framework.
Some minor issues, e.g.: (1) “Note that in the path T →D→Si ←Pi, Si is a collider…” (second line of the “Stage1” paragraph), should it be “T←D”? (2) It would be good to refer readers to the appendix for definitions of backdoor path, frontdoor adjustment, etc. (3) How is the Symbolic Rule Set R obtained? (4) One more ablation experiment: what if removing the first Translation LMT and replacing P̂G with PG? (5) Suggesting one relevant work: https://arxiv.org/abs/2205.11916 (manual exemplars not required).
Clarity, Quality, Novelty And Reproducibility
Clarity: some descriptions in the paper are currently not clear enough.
Quality: the quality of the paper is good.
Novelty: the contributions of the paper are novel.
Reproducibility: Some additional details are required for one to reproduce the results in the paper. Partial code was provided but not the executable full code. |
ICLR | Title
Neuro-Symbolic Procedural Planning with Commonsense Prompting
Abstract
Procedural planning aims to implement complex high-level goals by decomposition into simpler low-level steps. Although procedural planning is a basic skill set for humans in daily life, it remains a challenge for large language models (LLMs) that lack a deep understanding of the cause-effect relations in procedures. Previous methods require manual exemplars to acquire procedural knowledge from LLMs in the zero-shot setting. However, such elicited pre-trained knowledge in LLMs induces spurious correlations between goals and steps, impairing the model’s generalization to unseen tasks. In contrast, this paper proposes a neuro-symbolic procedural PLANner (PLAN) that elicits procedural knowledge from the LLMs with commonsense-infused prompting. To mitigate spurious goal-step correlations, we use symbolic program executors on the latent procedural representations to formalize prompts from external knowledge bases as a causal intervention toward the Structural Causal Model of procedural planning. Both automatic and human evaluations on WikiHow and RobotHow show the superiority of PLAN on procedural planning without further training or manual exemplars.1
1 INTRODUCTION
How to make a cup of coffee? As humans, we can easily specify a procedure to solve this task, using our innate ability of commonsense reasoning. However, can we endow machines with the same ability to construct a sequential plan? As depicted in Figure 1, procedural planning (Pearson, 1996; Zhang et al., 2020b; Huang et al., 2022) aims to decompose a high-level goal (Task: Watch TV) into a sequence of temporally extended steps (Procedural Plan: Step at all five time-steps).
We study procedural planning as a conditional text generation problem since it resembles real-world scenarios. Previous approaches (Huang et al., 2022; Ahn et al., 2022) require a small number of carefully written or held-out exemplars to acquire procedural knowledge. However, such manual exemplars, evolved from task data, cannot cover the ever-changing task setups and the flexible dependency relations among goals and steps. In fact, the biased data may cause the model to learn spurious correlations and hinder it from generalizing well in zero-shot scenarios. Studies in cognitive science show that humans rely on chunking mechanisms (Gobet et al., 2001; Miller, 1956), which turn primitive stimuli into conceptual groups, to solve novel and complex problems. Inspired by this, we hypothesize that generalizable procedural planning ability can be achieved by learning cause-effect relations among complex goals and simpler steps using external knowledge.
To reveal the cause-effect relations in procedural planning, we devise a Structural Causal Model (SCM) (Peters et al., 2017), a directed acyclic graph commonly used to describe the causal relationships within a system (Pearl, 2009). As depicted in Figure 2, the pre-trained knowledge (D) in LLMs (e.g., that TV and living room are highly correlated) confounds the system (D influences T, Si−1, and Si, resulting in spurious correlations) into making biased decisions toward an unreasonable step (e.g., Find
1Source code and datasets are publicly available at https://sites.google.com/view/iclr-clap
Television). Thus, we adopt the front-door adjustment (definition in Appendix A.3), which utilizes a mediator (Pi) that blocks all directed paths from the cause (T or Si−1) to the effect (Si). In this way, T (or Si−1) affects Si by flowing through indirect paths: T (or Si−1) affects Pi, and Pi affects Si. We can then identify the causal effects among goals and steps by investigating the indirect effect (Equation 3), which is computed by multiplying the effect of T (or Si−1) on Pi (Equation 1) with the effect of Pi on Si (Equation 2). With the above front-door adjustment, we can mitigate the spurious correlations (e.g., between "television" and "living room") and thus make reasonable decisions on steps (e.g., Find book). Please refer to Appendix A.1 for causal preliminaries (including explanations of SCM, confounder, mediator, and spurious correlations), and to Appendix A.3 for the definition of the front-door adjustment.
Guided by the above causal analysis of procedural planning, we need to construct the mediator Pi and then intervene on the task T and the prompt Pi, which is required to compute the conditional probability in Equation 3. As depicted in Figure 3, we seek to automatically construct commonsense-infused prompts as the mediator Pi by concatenating the task and previous steps with commonsense knowledge extracted from external resources (e.g., ConceptNet (Speer et al., 2017)). First, we modify the goal input by sampling a task-relevant knowledge subgraph (Stage1 in Section 3.1) to implement interventions on T. Then, we modify the prompt by adapting the edge weights to implement interventions on Pi (Edge-Wise Adaption of Stage2 in Section 3.1). However, directly incorporating graph-structured knowledge into LLMs loses the logical order needed for eliciting procedural knowledge from LLMs. Thus, we apply symbolic executors (Mao et al., 2019; Yi et al., 2018) that execute the sequential mapping program on latent knowledge representations (e.g., the subevent of). In this way, we translate graph-structured knowledge into natural language that preserves procedural structure, such as the sequential order of two low-level steps (Symbolic Structuring of Stage2 in Section 3.1). The procedural prompt PG (e.g., "please get the remote control") is further translated into an admissible one P̂G (e.g., "grab remote control") from the available steps in a certain domain (RobotHow or WikiHow in our case). Finally, we utilize the commonsense-infused prompt P̂G to control the generation of procedural plans in LLMs in a zero-shot setting (Section 3.2).
We conducted experiments on RobotHow (Puig et al., 2018) and WikiHow (Koupaee & Wang, 2018) under original and counterfactual situations. Our major contributions can be summarized as:
• We develop the first causal framework for procedural planning by 1) defining a temporally extended Structural Causal Model and 2) resolving spurious correlations between high-level goals and low-level steps via front-door adjustment with a prompt-based mediator.
• We propose a neuro-symbolic approach to construct commonsense-infused prompts for LLMs to tackle the procedural planning task without manual exemplars or further training.
• Extensive evaluations show the superiority of PLAN in reasoning about the cause-effect relations among goals and steps and in achieving promising planning ability.
2 EXTERNAL KNOWLEDGE MATTERS IN PROCEDURAL PLANNING
As depicted in Figure 1, procedural planning requires generating the Plan (e.g., Step 1: Walk to the living room.) conditioned on the Task (e.g., Watch TV). We first describe the problem definition
and then show why external knowledge matters in procedural planning through the lens of causality. Finally, we show how we elicit procedural ability from the Large Language Models (LLMs).
2.1 PROBLEM DEFINITION
Given the high-level task T (e.g., watch television in the living room) sampled from a task domain MT (e.g., RobotHow), a procedural planner aims to decompose it into lower-level temporally extended steps ST = {S1, ..., Si | Si ∈ S̄}. There exists a set of admissible steps S̄, which is fixed and constrained by the task domain MT (e.g., the affordances of the interacted objects). The step Si at timestep i is generated as π(Si|T, S0:i−1).
2.2 A CAUSAL LOOK AT PROCEDURE PLANNING WITH LLMS
We seek to empower the LLMs with the ability to reason cause-effect relations in procedural planning. Thus, we devise a causal framework by first defining a Structural Causal Model (SCM) of procedural planning in Figure 2. The SCM describes the temporal dynamics and procedural cause-effect relationship. Our causal assumption in SCM indicates that there is a backdoor path from task to step, which must be blocked with front-door adjustment. Therefore, we model the input prompt as a mediator which is created from external knowledge. More specifically, we define our Full Temporal Causal Graph as in Figure 2a, which is an unrolled Structural Causal Model (SCM) for sequential decision-making. Our goal is to identify the causal relations between the attended task T and plan procedures ST = {S1, S2, . . .} from LLMs. Initially, there are direct paths T → Si and Sk → Si, k < i because Si relies on the LLM attended task entities and previous accomplished steps. D is an unobserved confounder from learned knowledge during pre-training. D builds a backdoor path between T and Si and misguides the LLMs to attend to false entities to generate the next step (see Fig. 2b). Note that D is unobservable as we directly adopt the LLM without knowing the pre-training data. To mitigate the spurious correlation, we then introduce a mediator Pi for each Si as shown in Figure 2a. To achieve our front-door adjustment, we inject external knowledge into LLMs with a neuro-symbolic approach by adopting three stages described in Section 3.1.
3 OUR APPROACH
Although LLMs have strong general language intelligence, they still perform poorly at reasoning about the cause-effect relations in procedural plans due to a lack of daily-life experience. We propose to elicit unbiased procedural planning knowledge from the LLMs using the created commonsense-infused Prompt P as π(Si|T, S0:i−1, P). Figure 3 and Algorithm 1 depict how PLAN tackles the procedural
planning in a five-stage manner. We illustrate the commonsense-infused prompt construction (the first three stages) in Section 3.1 and planning with LLMs (the last stage) in Section 3.2.
3.1 COMMONSENSE-INFUSED PROMPT CONSTRUCTION
Overview Inspired by the causal analysis in Section 2.2, we propose to construct a commonsense-infused Prompt P that helps reveal the cause-effect relations among goals and steps during procedural planning, in 3 stages: 1) Stage1 samples a subgraph Gs from the external knowledge base G by extracting task (T)-relevant nodes. 2) Stage2 adapts the edge weights Ew in Gs and applies symbolic structuring to obtain the admissible knowledge prompt P̂G. 3) Stage3 acquires the temporal order by temporally aggregating the prompt Pi with the previous steps S0:i−1.
Stage1: Task-Relevant Knowledge Subgraph Sampling First, we investigate the causal effects T → Pi and Si−1 → Pi (Figure 2). Si is a collider that blocks the association between D and Pi in the path T ← D → Si ← Pi. Let πi denote π(·|Pi−1), the probability density function conditioned on Pi−1. Since there is no backdoor path for T → Pi, and similarly for Si−1 → Pi, we simply have the conditional probability after applying the do-operators:
\pi_i(P_i = p \mid \mathrm{do}(T)) = \pi_i(P_i = p \mid T), \qquad \pi_i(P_i = p \mid \mathrm{do}(S_{i-1})) = \pi_i(P_i = p \mid S_{i-1}) \quad (1)
We achieve the do-operation in a prompting way by modifying the goal input so that the model attends to the task-relevant entities. To implement this, we use NLTK to tokenize and POS-tag the task text T. We then use the nouns (e.g., television), noun phrases (e.g., remote control), and verb phrases (e.g., watch television) as concept nodes. In this way, the task name T is semantically parsed into the concept set TE. Each concept e ∈ TE is used as a query for sampling the H-hop task-relevant subgraph Gs ⊆ Ne × Rs × Ne from the external knowledge base G ⊆ N × R × N, where N and R denote the number of concept nodes and commonsense relations, respectively. When extracting Gs, we keep the triplets whose relation type is in the household domain (e.g., AtLocation, UsedFor) and filter out those in the linguistic domain (e.g., DistinctFrom, DerivedFrom) for the procedural planning task. Ne is maintained as a set of top-k task-relevant nodes using the weight of each Re, which is updated with the edge-wise adaption in Stage2.
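A minimal sketch of this concept-extraction step with NLTK (the downstream ConceptNet query is only indicated; the relation set mirrors the household relations named above, and the simple noun/verb filter stands in for the paper's phrase chunking):

```python
import nltk
from nltk import word_tokenize, pos_tag

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

HOUSEHOLD_RELATIONS = {"AtLocation", "UsedFor", "HasSubevent",
                       "HasPrerequisite", "HasLastSubevent", "MotivatedByGoal"}

def parse_concepts(task_name):
    """Extract noun/verb concepts from the task name as subgraph queries."""
    tokens = pos_tag(word_tokenize(task_name))
    # Keep nouns and verbs as candidate concept nodes.
    return [w.lower() for w, tag in tokens if tag.startswith(("NN", "VB"))]

concepts = parse_concepts("Watch television in the living room")
# -> e.g. ['watch', 'television', 'living', 'room']; each concept would then
# seed an H-hop ConceptNet query filtered to HOUSEHOLD_RELATIONS.
```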
Stage2: Edge-Wise Adaption and Symbolic Structuring Second, we need to find the causal effect for Pi → Si. Since the path Pi ← T ← D → Si contains a backdoor from Pi to Si, we cannot rely on the conditional probability. Instead, we intervene on Pi using the do-operator to cut off D → T:
\pi_i(S_i \mid \mathrm{do}(P_i = p)) = \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\, \pi_i(T = t, S_{i-1} = s) = \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\, \pi_i(S_{i-1} = s \mid T = t)\, \pi_i(T = t) \quad (2)
The retrieved concept-centered graph has multiple edges representing various relationships with other actions and entities. Therefore, the summation over the intervened T can be achieved by incorporating these edges into the prompt. For instance, "living room" can be "walked to" and "used for reading", while "book" can be located in "living room" and "bedroom". Similarly, we extrapolate over the edges for i−1 hops to aggregate the intervened Si, i.e., P(Si−1 = s|T = t). Directly ranking the retrieved nodes Ne by the annotated weight (Ew) in the external knowledge base would result in spurious correlations, because the retrieved local subgraphs tend to capture task-invariant concept nodes as the causal factors. To mitigate this, we propose to adapt the weight of each triplet (Edge-wise Adaption). The adapted weight is the sum of the original edge weight and the cosine similarity between the tail node embedding nEtail of the edge Re and the task embedding vtask: Êw ← Ew + cosine(nEtail, vtask). The embeddings are projected from the node text and task name using the sentence-transformer (Reimers & Gurevych, 2019). The nodes Ne are finally retrieved by ranking the adapted weight Êw. To better track the external knowledge utilized during inference, we construct the task-dependent commonsense prompt with a Symbolic Executor (Symbolic Structuring), guided by the relation type of each triplet in Gs whose adapted edge weight is above the threshold θe. Specifically, the Symbolic Executor acquires the neural information of each natural-language node and executes the sequential mapping program by sampling the operation Op from the Symbolic Rule Set R according to the edge relation type. The Symbolic Rule Set R is obtained by mapping the descriptions of the relations in the external knowledge graph (e.g., in ConceptNet, AtLocation means "A is a typical location for B, or A is the inherent location of B; some instances of this would be considered meronyms in WordNet") to symbolic operations (e.g., Op AtLocation). For instance, an AtLocation edge samples the operation Op AtLocation from R, which takes the commonsense relation of the triplet from Gs as the parameter to produce the procedural concept output given the natural-language meaning of the linked nodes (e.g., "go to the location of Start Node Of(re)" in this case). Similarly, Op UsedFor may map to "go to find End Node Of(re) and use it for Start Node Of(re)", and the operators Op HasSubevent and Op HasPrerequisite recursively navigate the subgraph Gs. After navigating the subgraph, we linearize the transformed triplets into the Procedural Prompt PG, which is then translated into the Admissible Knowledge Prompt P̂G by the Translation Language Model LMT.
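A minimal sketch of the edge-wise adaption, assuming a sentence-transformers encoder stands in for the embedding model (the triplet layout and the model name are assumptions):

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-roberta-large-v1")  # stand-in encoder

def adapt_edge_weights(triplets, task_name):
    """Re-weight (head, relation, tail, weight) triplets toward the task.

    Adapted weight: E_w + cosine(tail-node embedding, task embedding).
    """
    v_task = encoder.encode(task_name, convert_to_tensor=True)
    adapted = []
    for head, rel, tail, w in triplets:
        n_tail = encoder.encode(tail, convert_to_tensor=True)
        adapted.append((head, rel, tail,
                        w + util.cos_sim(n_tail, v_task).item()))
    # Rank edges by the adapted weight; keep those above the threshold θe.
    return sorted(adapted, key=lambda t: t[3], reverse=True)

edges = [("take a shower", "HasPrerequisite", "turn on the water", 2.0),
         ("take a shower", "HasSubevent", "wash your hair", 2.0)]
print(adapt_edge_weights(edges, "Take shower")[0])
```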
Stage3:Temporally-Extended Aggregation To acquire temporal order in the procedure, we obtain the Prompt P at timestep i with the aggregation of task T , history steps S0:i−1 and current external knowledge P̂G. The underlying causal mechanism is a combination of Eq. 1 and Eq. 2:
\pi_i(S_i \mid \mathrm{do}(T), \mathrm{do}(S_{i-1})) = \sum_{p} \pi_i(S_i \mid \mathrm{do}(P_i = p))\, \pi_i(p \mid \mathrm{do}(T), \mathrm{do}(S_{i-1})) = \sum_{p} \pi_i(p \mid T) \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\, \pi_i(T = t, S_{i-1} = s) \quad (3)
The adjustment and marginalization in Eq. 3 are achieved in the input space by forming the Procedural Prompt PG, which allows the LLM to attend to the causal entities instead of the highly correlated ones for next-step generation. The LLM can reason over the most relevant edges to link the concepts with the task entities as context. The prompts from knowledge bases are independent of the pre-training data distribution, so Pi is independent of D and satisfies the front-door criterion. Please refer to Appendix A.3 and Figure 4 for the simplification of our structural causal model.
3.2 PROCEDURAL PLANNING WITH LARGE LANGUAGE MODELS
Stage4: Semantic Generation The external knowledge is further concatenated with the goal input (T) as the initial prompt. Given the prompt, the generation language model LMG ∈ {PAR, PAE} (e.g., GPT3, BART) generates the next sentence, and the most confident prediction is then appended to the previous prompt. The termination condition is either reaching the maximum number of steps or the matching score falling below the threshold θ. The joint probabilities of the auto-regressive (PAR) and auto-encoder (PAE) models are factorized as:
\pi_{AR}(x) = \prod_{i=1}^{n} p(s_i \mid \hat{P}_G, s_{1:i-1}, T), \qquad \pi_{AE}(x) = \prod_{i=1}^{n} p(s_i \mid \hat{P}_G, \{s_{1:i-1}, [\mathrm{MASK}]\}, T) \quad (4)
where P̂G represents the commonsense knowledge and T represents the task name.
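A minimal sketch of this generation loop with a HuggingFace pipeline (the model choice and the score_fn hook, which stands in for the Stage5 matching score used in the θ termination test, are assumptions):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-xl")

def generate_plan(task, knowledge_prompt, max_steps=20, theta=0.7,
                  score_fn=None):
    """Greedy step-by-step plan generation with threshold termination."""
    prompt = f"{knowledge_prompt}\nTask: {task}\n"
    steps = []
    for i in range(1, max_steps + 1):
        prompt += f"Step {i}:"
        out = generator(prompt, max_new_tokens=20, num_return_sequences=1)
        step = out[0]["generated_text"][len(prompt):].split("\n")[0].strip()
        # score_fn: e.g. similarity to the closest admissible step (Stage5).
        if score_fn is not None and score_fn(step) < theta:
            break  # terminate when confidence drops below θ
        steps.append(step)
        prompt += f" {step}\n"
    return steps
```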
Algorithm 1 Neuro-Symbolic Procedural Planning using Commonsense-Infused Prompting
Require:
Task Sample T , Admissible Step Set S, External Knowledge Graph G; Language Model for Generation LMG and Translation LMT , Symbolic Rule Set R;
Ensure: 1: [Stage1] Semantically parse T into entity set TE ; 2: Maintain top-k task-relevant nodes Ne in TE ; 3: Retrieve subgraph Gs ⊆ Ne ×Rs ×Ne from G ⊆ N ×R×N for each e ∈ TE ; 4: [Stage2] Edge-wise adaption as Êw ← Ew + cosine(nEtail , vtask) and re-rank Ne in TE ; 5: Map the description text of the relationsRs in Gs as Symbolic Rule Set R; 6: Construct procedural prompt PG by verbalizing the re-weighted Gs using R; 7: Translate PG in Admissible Knowledge Prompt P̂G = LMT (PG);
Temporally-extended zero-shot inference for Procedural Plan ST = {S1, ..., Si}: 8: for each timestep i do 9: [Stage3] Aggregate Prompt Pi ← [T ;S0:i−1; P̂G];
10: [Stage4] and [Stage5] Si = LMT (LMG(Pi)); 11: Update Procedural Plan ST ← Si; 12: end for
Stage5:Admissible Step Translation To ensure that the generated procedural plans are grounded to the environment, we should avoid producing the steps that are inadmissible (e.g. Toast the table). In other words, the generated steps should be fully constrained to the admissible composite of action and object in a certain task domain. Thus previous works (Huang et al., 2022; Ahn et al., 2022) have explored using the model (which is LMT in our case) to score a step selected from a fixed set of available options, instead of directly sampling from the output distributions of the language model (which is LMG in our case). Specifically, we match the generated step by LMG to the most similar admissible step in the embedding space encoded by the Translation Language Model LMT . Following (Huang et al., 2022), we utilize a Sentence-Transformer (Reimers & Gurevych, 2019) to calculate the cosine similarity as π(si|x) = LMT (LMG(x)), which translates LMG(x) into the admissible step si ∈ S̄ that is the closest in the embedding space measured by the cosine similarity.
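A minimal sketch of this matching step (the sentence-transformers model is a stand-in for the RoBERTa-large translation model LMT used in the paper):

```python
from sentence_transformers import SentenceTransformer, util

lm_t = SentenceTransformer("all-roberta-large-v1")

def to_admissible(generated_step, admissible_steps):
    """Map a free-form generated step to the closest admissible step."""
    gen_emb = lm_t.encode(generated_step, convert_to_tensor=True)
    adm_emb = lm_t.encode(admissible_steps, convert_to_tensor=True)
    scores = util.cos_sim(gen_emb, adm_emb)[0]  # cosine similarities
    best = int(scores.argmax())
    return admissible_steps[best], float(scores[best])

step, score = to_admissible("please get the remote control",
                            ["grab remote control", "walk to kitchen"])
# The returned score can also drive the θ termination test in Stage4.
```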
3.3 COUNTERFACTUAL PROCEDURAL DATA CONSTRUCTION
To investigate the counterfactual reasoning ability, we design three families of intervention methods: 1) Initial Configuration: intervene on the initial configuration, such as the location for implementing the task. 2) Intermediate Step: randomly select one step from the ground-truth program as an additional constraint on implementing the task and append it to the task name for generating the procedural plan. 3) Final Goal: intervene on the task goal by compositing it with another randomly sampled task. Table 5 in the Appendix summarizes the categories and descriptions. The counterfactual dataset construction details and post-intervention examples are provided in Appendix B.2.
4 EXPERIMENTS
4.1 PROCEDURAL PLANNING SETUP
Datasets We conduct zero-shot experiments, without any training, on two datasets with procedural information: WikiHow (collected following Koupaee & Wang (2018)) and RobotHow (Puig et al., 2018). WikiHow is a large-scale text summarization dataset constructed from a human-written knowledge base, involving procedural tasks that span various topics. We utilize the "how to" titles as the task names and the summarized headlines as the steps. RobotHow is a large knowledge base of common household tasks collected in the VirtualHome (Puig et al., 2018) simulator. The dataset contains programs with high-level task names and low-level steps. MT is composed of 292 and 2,000 distinct tasks from RobotHow and WikiHow, respectively. Human evaluations use 50 randomly sampled task examples per dataset. Automatic evaluations use 150 and 1,000 task examples randomly sampled from RobotHow and WikiHow, respectively. Please refer to Appendix B.1 and Appendix B.2 for dataset details.
Baselines We compare our approach with three vanilla generative pre-trained language models (BART, GPT2, and GPT3) and two powerful generation baselines (Zero-shot Planner (Huang et al., 2022) noted as “LLMaP” and Chain of Thought (Wei et al., 2022) noted as “Chain”). More method and configuration details of the models can be found in Appendix B.3 and Appendix B.4.
Metrics We ask human annotators on the Amazon Mechanical Turk platform to rate model performance on two aspects: 1) Coverage: which sequence covers more of the steps that are necessary to complete the target task (captures semantic completeness). 2) Order: which set of steps, considered in sequence, can better complete the target task (captures sequential order correctness). In addition, we use Sentence-BLEU (S-BLEU) (Papineni et al., 2002), BERTScore (Zhang* et al., 2020), ROUGE-1 (Lin, 2004), and Word Mover's Distance (WMD) (Kusner et al., 2015) as automatic evaluation metrics. These metrics compute semantic scores between the annotated programs and the predictions. Details of the crowdsourcing human evaluation can be found in Appendix C.1.
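A minimal sketch of these automatic metrics, assuming the nltk, bert-score, and rouge-score packages (the toy reference/candidate strings are ours; WMD is only indicated since it additionally needs word vectors):

```python
from nltk.translate.bleu_score import sentence_bleu
from bert_score import score as bert_score
from rouge_score import rouge_scorer

reference = "walk to living room . find television . switch on television"
candidate = "walk to living room . switch on television"

s_bleu = sentence_bleu([reference.split()], candidate.split())
_, _, f1 = bert_score([candidate], [reference], lang="en")
rouge1 = rouge_scorer.RougeScorer(["rouge1"]).score(reference,
                                                    candidate)["rouge1"]
print(s_bleu, f1.item(), rouge1.fmeasure)
# WMD: e.g. gensim KeyedVectors.wmdistance(reference.split(),
# candidate.split()) given pre-trained word embeddings.
```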
4.2 HUMAN EVALUATION RESULTS WITH COVERAGE AND ORDER METRIC
Each example is rated by 3 crowdsourcing annotators. For the Win-Lose Comparison, we ask the human raters to choose between ours and the baseline LLMaP (Huang et al., 2022). Averaged results reported in Table 1 show that our PLAN is more frequently rated as better on both the coverage and order metrics, outperforming baselines in winning ratio by 21% (coverage) and 26% (order) across the two datasets. We report the average Human Ratings on a 5-point Likert scale in Table 2. The consistent performance boost of PLAN indicates the superiority of injecting external commonsense knowledge into the procedural planning task. The performance drops of LLMaP and Chain in the counterfactual setting indicate the vulnerability of fixed holdout knowledge and pre-defined manual exemplars in causal procedural planning. Please refer to Appendix C.1 for details of the crowdsourcing human evaluation interface. Table 3 shows two examples for qualitative comparison. More examples can be found in Appendix D.
4.3 AUTOMATICALLY MEASURING THE PROCEDURAL PLANNING
Main Results Table 4 summarizes the automatic evaluation results. PLAN achieves the best results regardless of the language model architecture, autoregressive or autoencoder based. The performance gain of "LLMaP" over "Chain" is probably due to its direct exposure to the holdout tasks from the dataset, while the "Chain" baseline still outperforms the vanilla baselines that only take the high-level task name as the prompt. Note that the annotated program is not the only solution; thus, these automatic metrics provide limited information about absolute performance. Details on the correlation between the automatic metrics and human evaluation can be found in Section 4.5.
Effects of Edge-wise Adaption and Symbolic Program Execution The variant "w/o Adaption" maintains the top-k task-specific nodes ranked by the annotated weight EW in the external knowledge base G, without adaption. The variant "w/o Symbolic" directly takes the extracted concept nodes from the external knowledge base as the prompt. The performance drops of these two variants in Table 4, with significance tests in Appendix C.2, demonstrate the importance of the adaption and symbolic modules.
Effects of the Large Language Model Architecture We use GPT2 and GPT3 as the autoregressive architecture and BART (Lewis et al., 2020) as the autoencoder architecture. The autoregressive architecture achieves better results than the autoencoder one. Since the pre-training objective of the autoregressive GPT is to predict the next token given the previous input tokens, we attribute the performance gain of GPT to the smaller gap between the pre-training objective and procedural planning.
Level of Complexity We report results on the test set separated into several buckets according to the number of steps in the procedural planning task; the step number reflects the difficulty of the task. Table 7 and Table 8 in Appendix C.2 show that the averaged performance gain of PLAN over the baselines is consistent, and even more significant, in more complicated procedural planning settings. This indicates the superiority of PLAN in solving long-horizon tasks.
4.4 RESULTS ON COUNTERFACTUAL TASK SAMPLES
We apply Initial Configuration, Intermediate Step, Final Goal interventions on RobotHow and Intermediate Step on WikiHow. Human evaluations under counterfactual setting are summarized in Table 1 and Table 2. PLAN consistently outperforms baselines by a large margin and experiences
a much smaller performance drop than the strong baselines when switching to the counterfactual setting. We attribute this to the biased knowledge of the holdout examples and the manual exemplars utilized by the baselines, which are vulnerable to counterfactual samples. Automatic evaluations on counterfactual RobotHow are summarized in Table 13 in Appendix C.2. Aligned with the human evaluations, PLAN achieves the best performance. The overall poor performance in the Final Goal category indicates the challenge of long-horizon and composite procedural planning, while the overall better performance in the Intermediate Step category benefits from the intermediate guidance.
4.5 CORRELATION BETWEEN AUTOMATIC AND HUMAN EVALUATION
We evaluate the segment-level Pearson correlation between human and automatic metrics. We observe that BERTScore has a moderate correlation with the human coverage score and WMD has a moderate correlation with the human order score, at 23.3% and 32.3%, respectively. Similar to prior findings (Xu et al., 2021), n-gram-based metrics (Sentence-BLEU and ROUGE) have a relatively weaker correlation with the human coverage score, with Pearson correlations of 16.4% and 21.1%. Overall, our automatic and human evaluation scores are consistent with the main claim of this paper. However, human evaluation is still irreplaceable for procedural planning at the current stage.
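A minimal sketch of this correlation computation with SciPy (the scores are placeholders, not the reported values):

```python
from scipy import stats

# Segment-level scores over the same set of generated plans (placeholders).
human_coverage = [4.0, 3.5, 2.0, 5.0, 3.0]
bertscore_f1 = [0.90, 0.87, 0.80, 0.93, 0.85]

r, p = stats.pearsonr(human_coverage, bertscore_f1)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```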
5 RELATED WORK
Procedural Planning Learning to generate procedural plans (Zhang et al., 2020a; Lyu et al., 2021; Zhang et al., 2020b; Chang et al., 2020; Wu et al., 2022; Huang et al., 2022) is important for embodied agents (Tellex et al., 2011; Jansen, 2020; Ahn et al., 2022) and conversational assistants (Ilievski et al., 2018; Yang et al., 2022). Previous work views procedural script learning as a structured form of commonsense knowledge (Gupta et al., 2004; Regneri et al., 2010; Wanzare et al., 2016), while more recent work strengthens its association with changing environments for executable action planning (Puig et al., 2018; Shridhar et al., 2020). Some works (Sun et al., 2020; Zhao et al., 2021) explore utilizing human-written programs to precisely specify tasks. Our method tackles the problem with awareness of cause-effect relations by utilizing commonsense-infused prompts via a neuro-symbolic approach (Mao et al., 2019; Nye et al., 2021; Yi et al., 2018) for zero-shot procedural planning.
Causality for Language Generation The integration of causality and machine learning has been an intriguing topic for many problems (Pearl, 2009; Schölkopf, 2022). Previous studies focus on causal inference for natural language understanding (Chen et al., 2020; Keith et al., 2020; Wood-Doughty et al., 2018) and on generating counterfactual text representations (Feder et al., 2021). Weber et al. (2020) propose an intervention method for script learning. However, these methods cannot be directly applied to procedural planning, which requires a formal structure. Our method is based on mediation analysis (VanderWeele, 2015) and causal intervention (Pearl, 2009; Peters et al., 2017).
Prompt for Large Language Model There is an emerging interest in using prompts to extract knowledge from large language models (Chen et al., 2022; Le Scao & Rush, 2021; Su et al., 2022; Ye et al., 2022; Zhou et al., 2022; Kojima et al., 2022). Cao et al. (2022) treats the prompt as a cause of the task-specific predictor and investigates biases in prompt-based probing evaluations. Chain of thought Wei et al. (2022) discovers that LLM can perform better on reasoning tasks when the prompt is designed as a series of short sentences that mimic the reasoning process of humans.
6 CONCLUSION AND FUTURE WORK
Procedural planning is a newly emerged research area of great importance to various applications, such as household robots and virtual assistants. We propose a neuro-symbolic procedural PLANner (PLAN) with commonsense-infused prompts elicited from an external knowledge base to solve the procedural planning problem in a zero-shot manner, without human-annotated exemplars. Experiments show the effectiveness of our proposed PLAN under both the original and counterfactual settings, indicating its capability of mitigating spurious correlations by injecting external knowledge into LLMs. Still, procedural planning over long-horizon and composite tasks remains challenging, and exploring multimodal learning and developing human-aligned evaluation metrics are promising future directions in this area.
7 ETHICAL STATEMENT
Given the limited diversity of the cultural backgrounds in the RobotHow and WikiHow datasets we use, we assume our results may be biased toward a single cultural background. For instance, generating procedural plans for the task "make breakfast" should take multiple cultures into consideration.
8 REPRODUCIBILITY STATEMENT
We provide more data samples and qualitative examples in the supplemental materials. In addition, we provide our code implementation at https://anonymous.4open.science/r/PLANNER-7B24 to reproduce our experiments. The Preprocess folder provides the utilities used to construct the data. The Evaluation folder provides the code for the automatic and human evaluation tools. The Planning folder contains the main code for our approach and the reproduced planners for procedural planning. The Visualization folder provides the code we use to visualize plans in the environment.
ACKNOWLEDGMENTS
The research was sponsored by the U.S. Army Research Office and was accomplished under Contract Number W911NF-19-D-0001 for the Institute for Collaborative Biotechnologies. This work was also supported by the National Science Foundation award #2048122. We thank the Robert N.Noyce Trust for their generous gift to the University of California via the Noyce initiative. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Appendix
Table of Contents
A SCM Theoretical Details
A.1 Causal Preliminaries
A.2 The Backdoor Adjustment
A.3 The Front-door Adjustment
B Implementation Details
B.1 Original Dataset Details
B.2 Counterfactual Dataset and Experiment Details
B.3 Method Details
B.4 Hyperparameter Search and Configuration Decision
B.5 Computation and Resources
C Evaluation Details
C.1 Crowdsourcing Human Evaluation
C.2 More Results
D Qualitative Examples
D.1 Intermediate Output
D.2 Predicted Procedural Plans
E Discussion
E.1 Limitations
E.2 Failure Analysis
E.3 Ethical Considerations
A SCM THEORETICAL DETAILS
A.1 CAUSAL PRELIMINARIES
The Structural Causal Model (SCM) is a directed acyclic graph (DAG) that describes the causal relationships within a system (Pearl, 2009). In this paper, we refer to the SCM unrolled along the time dimension as the full temporal causal graph, while the rolled-up version is also called the causal summary graph (Peters et al., 2017). In an SCM, if a variable D is a cause of both T and Si, it is called a confounder. A confounder opens up a backdoor path and causes a spurious correlation between T and Si. The backdoor path is defined as the remaining path between T and Si when all the arrows pointing out of T are removed. Therefore, T ← D → Si is a backdoor path. For our SCM with mediator Pi, shown in Figure 4c (same as Figure 2b of the main paper), there is no backdoor path between T and {Pi, Si−1} because only D → T is left after removing the outgoing arrows of T. On the other hand, there is a backdoor path between Pi and Si, i.e., Pi ← T ← D → Si, so Pi indirectly affects the observation of Si through {T, Si−1} and D. The mediator is a variable added between the treatment variable (the cause, T and Si−1 in our case) and the outcome variable (the effect, Si in our case) that blocks all directed paths from the cause to the effect (Zhang et al., 2016). Spurious correlations happen when two variables are statistically related but not causally related, because a third variable influences both at the same time or the correlation is coincidental.
To identify the true causal effect between X and Y , we aim to estimate the conditional π(Y |do(X)) after intervention with the do-operator. The do-operator is to break the backdoor path by setting X to a fixed value independent of Z. Then the path Z → X can be removed to eliminate the backdoor paths. In practice, the backdoor adjustment and front-door adjustment are two fundamental methods to implement interventions and obtain the conditional π(Y |do(X)). Clarity of the Definition As a language prompt, Pi inherits the content from Pi−1 and thus can be detached from steps before Si−1 for simplicity.
Causal Intervention There are two types of operation to control the confounding bias: the backdoor adjustment and the front-door adjustment (Pearl, 2009). The backdoor adjustment is intractable in our case because it requires the prior distribution of the confounding variables. On the other hand, we can construct an input prompt as a mediator Pi for T → Si and Si−1 → Si. Then the front-door adjustment applies a two-step do-operation to mitigate bias by investigating P → Si (Pearl, 2009). Specifically, we construct the prompt mediator Pi using techniques illustrated in Section 2.2.
The pre-trained knowledge (D) in LLMs confounds language models to make biased decisions toward an unreasonable action. Since the confounder is unobservable, intervention techniques such as back-door (definition in Appendix A.2) adjustment (Hu & Li, 2021; Weber et al., 2020; Yue et al., 2020) are not applicable in our SCM. Instead, we build a mediator and implement it as a commonsense-infused prompt. Through the mediator, we can identify causal effects among goals and steps by investigating the indirect effect from the goals, which is essentially the front-door adjustment (definition in Appendix A.3) in causality (Pearl, 2009).
A.2 THE BACKDOOR ADJUSTMENT
The backdoor adjustment is one way to realize the intervention do(T = t) by considering the conditional probability over the existing data distribution with an observed confounder D. Let πi denote π(·|Pi−1), the probability density function conditioned on Pi−1. The adjustment calculates the average causal effect by considering all strata of the dataset:
$$\pi_i(S_i \mid do(T)) = \sum_{d} \pi_i(S_i \mid T, D = d)\,\pi_i(D = d) \tag{5}$$
However, for LLMs, the pretraining data is usually unobservable and has been transformed as knowledge incorporated into the hidden space. Therefore, we are not able to directly apply the backdoor adjustment.
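For intuition, the following is a minimal numeric sketch of Equation 5 with a toy observed confounder; all probability values are hypothetical, and it is exactly this computation that becomes infeasible when D is hidden inside the LLM:

```python
import numpy as np

# Toy joint distribution over a binary confounder D, treatment T, and outcome S.
# All numbers here are hypothetical; they only illustrate Equation 5.
p_d = np.array([0.7, 0.3])          # pi(D = d)
p_s_given_t_d = np.array([          # pi(S = 1 | T = t, D = d)
    [0.2, 0.9],                     # row for T = 0
    [0.6, 0.8],                     # row for T = 1
])

def backdoor_adjust(t: int) -> float:
    """pi(S = 1 | do(T = t)) = sum_d pi(S = 1 | T = t, D = d) * pi(D = d)."""
    return float(np.sum(p_s_given_t_d[t] * p_d))

print(backdoor_adjust(0), backdoor_adjust(1))
```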
A.3 THE FRONT-DOOR ADJUSTMENT
The front-door adjustment is another technique to apply intervention, by introducing a mediator Pi when the confounder is unobservable. As explained in Section 2.2 of the main paper, the front-door adjustment is equivalent to two consecutive do-operations on the task T and the prompt Pi. We first investigate the generation of S1 and then expand it to St.
Timestep i = 1 As shown in Figure 4a, since there are no preceding steps, the first step generation involves only D, T, and P1. Similar to the proof in Section 2.2 of the main paper, we have:
$$\pi_i(S_1 \mid do(T)) = \sum_{p} \pi_i(S_1 \mid do(P_1 = p))\,\pi_i(p \mid do(T)) = \sum_{p} \pi_i(p \mid T) \sum_{t} \pi_i(S_1 \mid p, T = t)\,\pi_i(T = t) \tag{6}$$
By intervening on T, we make the value of do(T = t) independent of the confounder D from the beginning; the backdoor path through D → T is eliminated as a result.
Timestep i > 1 As shown in Figure 2a of the main paper, we model the mediator Pi as an effect of three variables: T, Pi−1, and Si−1. The first step of our front-door adjustment is to apply the do-operator on the three variables and observe the change in Pi, as explained in Section 2.2 of the main paper. Since there are no backdoor paths between Pi and these variables, the probability after intervention equals the conditional probability without intervention:
$$\pi_i(P_i = p \mid do(T)) = \pi_i(P_i = p \mid T) \tag{7}$$
$$\pi_i(P_i = p \mid do(P_{i-1})) = \pi_i(P_i = p \mid P_{i-1}) \tag{8}$$
$$\pi_i(P_i = p \mid do(S_{i-1})) = \pi_i(P_i = p \mid S_{i-1}) \tag{9}$$
The second step is to apply the do-operator on Pi and then identify the causal effect as:
$$\pi_i(S_i \mid do(P_i)) = \sum_{t, p', s} \pi_i(S_i \mid P_i, T = t, P_{i-1} = p', S_{i-1} = s)\,\pi_i(T = t, P_{i-1} = p', S_{i-1} = s) \tag{10}$$
Combining Equations 7–9 and Equation 10, we have the front-door adjustment. Note that there are three backdoor paths, one from each of the variables T, Pi−1, and Si−1, as shown in Figure 4b (drawn in blue, red, and purple). More importantly, the path through T, i.e. Pi ← T ← D → Si (the blue path in Figure 4b), and the path through Pi−1, i.e. Pi ← Pi−1 ← T ← D → Si (the red path in Figure 4b), share the same subpath. The intervention on the task T breaks the backdoor paths for both T and Pi−1. Therefore, we have our front-door adjustment as
$$\pi_i(S_i \mid do(S_{i-1}), do(P_{i-1}), do(T)) \tag{11}$$
$$= \sum_{p} \pi_i(S_i \mid do(P_i = p))\,\pi_i(p \mid do(S_{i-1}), do(P_{i-1}), do(T)) \tag{12}$$
$$= \sum_{p} \pi_i(S_i \mid do(P_i = p))\,\pi_i(p \mid do(S_{i-1}), P_{i-1}, do(T)) \tag{13}$$
$$= \sum_{p} \pi_i(S_i \mid do(P_i = p))\,\pi_i(p \mid do(S_{i-1}), do(T)) \tag{14}$$
$$= \sum_{p} \pi_i(p \mid S_{i-1}, T) \sum_{s,t} \pi_i(S_i \mid p, S_{i-1} = s, T = t)\,\pi_i(S_{i-1} = s, T = t) \tag{15}$$
$$= \pi_i(S_i \mid do(S_{i-1}), do(T)) \tag{16}$$
Equation 13 holds because of the intervention on T and Rule 2 (Pearl, 1995), and Equation 14 because of Rule 1 (Pearl, 1995). After simplification based on Equations 12–16, we get the SCM at timestep i > 1 in Figure 4c, an equivalent SCM after eliminating Pi−1 in Figure 4b. The reason we can eliminate Pi−1 is as follows. We follow a common method of constructing temporally extended prompts, which is to append the predictions at previous timesteps to the prompt at the current timestep. In our case, PG,i is the same as PG,i−1, so Pi inherits part of its content from Pi−1 and the change depends only on Si−1. Thus Pi−1 and Si−2 are fixed, and there is no need to predict Pi−1 again at timestep i. In this way, we simplify the causal graph in Figure 4b to the one in Figure 4c. In summary, we define and simplify the causal graph based on the temporally extended property of our prompt construction (Pi inherits the content from Pi−1). We end up with Equations 14–16, which are shown as Equation 3 in Section 2.2 of the main paper.
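As a sanity check on the derivation, the following toy computation sketches the single-step front-door estimate (Equation 6); the discrete distributions are hypothetical placeholders, not values estimated from any model:

```python
import numpy as np

# Hypothetical discrete distributions for the single-step case:
# pi(S1 | do(T)) = sum_p pi(p | T) sum_t pi(S1 | p, T = t) pi(T = t)
p_t = np.array([0.5, 0.5])            # pi(T = t)
p_p_given_t = np.array([[0.8, 0.2],   # pi(P = p | T = t), rows indexed by t
                        [0.3, 0.7]])
p_s_given_p_t = np.array([            # pi(S = 1 | P = p, T = t), indexed [p, t]
    [0.1, 0.4],
    [0.6, 0.9],
])

def frontdoor_adjust(t: int) -> float:
    total = 0.0
    for p in range(2):
        # First do-step: pi(p | do(T = t)) = pi(p | T = t), since P has no backdoor.
        w = p_p_given_t[t, p]
        # Second do-step: pi(S = 1 | do(P = p)) marginalizes over the treatment.
        effect = sum(p_s_given_p_t[p, t2] * p_t[t2] for t2 in range(2))
        total += w * effect
    return total

print(frontdoor_adjust(0), frontdoor_adjust(1))
```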
B IMPLEMENTATION DETAILS
B.1 ORIGINAL DATASET DETAILS
RobotHow This dataset is released under an Attribution-NonCommercial-ShareAlike 4.0 International Creative Commons License. We evaluate the inference of 150 tasks randomly selected from the dataset. Each program contains the task name, task description, and steps. We use the task name and the sequence of steps as our input and output references. Each step is a composition of [Action], [Object], and [Number]. For example, the sequence of steps of the task "Watch TV" is: 1. [Walk] <TELEVISION> (1) 2. [SwitchOn] <TELEVISION> (1) 3. [Walk] <SOFA> (1) 4. [Sit] <SOFA> (1) 5. [Watch] <TELEVISION> (1). A hypothetical parser for this format is sketched below.
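The sketch assumes the "[Action] <OBJECT> (Number)" surface form shown above; the regular expression and helper name are illustrative, not part of the released dataset tooling:

```python
import re

# Hypothetical parser for the "[Action] <OBJECT> (Number)" step format of RobotHow.
STEP_PATTERN = re.compile(r"\[(?P<action>\w+)\]\s*<(?P<obj>[^>]+)>\s*\((?P<num>\d+)\)")

def parse_step(step: str):
    m = STEP_PATTERN.match(step.strip())
    if m is None:
        raise ValueError(f"not a well-formed RobotHow step: {step!r}")
    return m.group("action"), m.group("obj"), int(m.group("num"))

plan = ["[Walk] <TELEVISION> (1)", "[SwitchOn] <TELEVISION> (1)",
        "[Walk] <SOFA> (1)", "[Sit] <SOFA> (1)", "[Watch] <TELEVISION> (1)"]
print([parse_step(s) for s in plan])
```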
WikiHow This dataset2 is under an Attribution-Noncommercial-Share Alike 3.0 Creative Commons License, and the text content is free to modify, republish, and share. We evaluate the inference of 1000 tasks randomly selected from the dataset. The admissible action space and interaction object space are more complex than the programs in RobotHow, and there is no fixed "[Action] <Object> (Number)" form for each step. Each article contains the title, the bold headlines, and text. We utilize the title and headlines as our task name and steps, respectively.
External Knowledge Base For the external knowledge base, we utilize ConceptNet to leverage commonsense reasoning ability and help ground language generation in goal-guided procedural text generation. ConceptNet (Speer et al., 2017) captures commonsense knowledge explicitly with triplets of (head node, relation, end node). It contains 799,273 nodes and 2,487,810 edges that represent both symmetric and asymmetric relations. Specifically, the core relations we utilize are Synonym, AtLocation, CapableOf, Causes, CausesDesire, HasPrerequisite, HasSubevent, and UsedFor. Since we focus on commonsense knowledge in household tasks, we filter out the relations (/r/DistinctFrom, /r/DerivedFrom, /r/SymbolOf, /r/EtymologicallyRelatedTo, /r/EtymologicallyDerivedFrom) that are linguistic rather than household-related. A minimal filtering sketch follows.
2https://www.wikihow.com
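The triplets below are hypothetical stand-ins for entries loaded from a ConceptNet dump; the relation whitelist mirrors the filtering described above:

```python
# A minimal sketch of the relation filtering; triplets are illustrative examples.
HOUSEHOLD_RELATIONS = {"Synonym", "AtLocation", "CapableOf", "Causes",
                       "CausesDesire", "HasPrerequisite", "HasSubevent", "UsedFor"}

triplets = [
    ("take a shower", "HasPrerequisite", "turn on the water", 2.0),
    ("shower", "DerivedFrom", "showers", 1.0),          # linguistic -> filtered out
    ("book", "AtLocation", "living room", 1.0),
]

kept = [(h, r, t, w) for (h, r, t, w) in triplets if r in HOUSEHOLD_RELATIONS]
print(kept)
```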
B.2 COUNTERFACTUAL DATASET AND EXPERIMENT DETAILS
Table 6 shows examples comparing the original program and the counterfactual program for each intervention method. Specifically, for Initial Configuration, we randomly append a location to a given task name to constrain where the task is completed, and the steps are prepended with the initial step "walk to <Location>". For Intermediate Step, we randomly sample a step from the task-specific program and append it to the task name to constrain how the task is implemented. For Final Goal, we randomly combine two tasks by combining both the task names and the programs to construct a set of long-horizon composite tasks. A sketch of these interventions is given below.
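This is a minimal sketch of the three intervention families; the function names and the task/step strings are illustrative assumptions rather than the exact dataset-construction code:

```python
import random

def initial_configuration(task: str, steps: list, location: str):
    # Constrain the location and prepend the corresponding initial step.
    return f"{task} in {location}", [f"walk to {location}"] + steps

def intermediate_step(task: str, steps: list):
    # Append a randomly sampled ground-truth step as an extra constraint.
    constraint = random.choice(steps)
    return f"{task}. Constraint: {constraint}", steps

def final_goal(task_a: str, steps_a: list, task_b: str, steps_b: list):
    # Compose two tasks into one long-horizon task.
    return f"{task_a} and {task_b}", steps_a + steps_b

task, steps = "Play games", ["walk to video game console", "switch on video game console"]
print(initial_configuration(task, steps, "dining room"))
```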
We conduct counterfactual experiments by applying randomly selected intervention methods over RobotHow, and we only apply the Intermediate Step intervention method over WikiHow due to its loose configuration requirement and the long text of the WikiHow contents. Note that the performance gain of PLAN under the counterfactual setting mainly comes from the additional guidance introduced by the Intermediate Step intervention method, whereas the baselines mostly experience performance drops due to their limited annotated exemplars. PLAN consistently outperforms the baselines by a large margin, indicating its superiority under the counterfactual setting.
B.3 METHOD DETAILS
The existing formalization of the procedural planning task can be mainly categorized as 1) sequential choice making (Lyu et al., 2021; Wu et al., 2022; Zhang et al., 2020a;b), which reasons about the next step from the options given, the task, and previous steps; 2) conditioned generation (Huang et al., 2022; Ahn et al., 2022), which generates the temporally extended plans to implement the task. We study the procedural planning task as the conditioned generation problem (Huang et al., 2022; Ahn et al., 2022) since it resembles real-world scenarios.
Baselines LLMaP proposes a procedure to extract temporally extended plans from large pre-trained language models. Chain explores manually created exemplars that mimic the reasoning process and uses them to prompt large language models on reasoning tasks. To compare with Chain on the procedural planning task, we manually generate exemplars that contain the chain of thought for 1% of the inference task programs. Note that for the BART language model, we use the BART-large version, and we use the 1.5-billion-parameter GPT-2 (aka gpt2-xl). For the translation model LMT, we use sentence-transformers (RoBERTa-large). All these models are released by HuggingFace. In addition, our experiments with GPT-3 (davinci) use the OpenAI API (May 2022). A sketch of this setup is given below.
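The following is a rough sketch of the model setup; the exact checkpoint identifiers and decoding arguments are assumptions, not the paper's exact configuration:

```python
from transformers import pipeline
from sentence_transformers import SentenceTransformer

generator = pipeline("text-generation", model="gpt2-xl")   # LM_G (autoregressive)
translator = SentenceTransformer("stsb-roberta-large")     # LM_T (RoBERTa-large encoder)

prompt = "Task: Watch TV. Step 1:"
candidates = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(candidates[0]["generated_text"])
```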
External Knowledge Graph ConceptNet5 defines a set of 34 relations.3 For the relations we consider in the procedural planning task, the average time of subgraph sampling is 0.03576 milliseconds per task program.
B.4 HYPERPARAMETER SEARCH AND CONFIGURATION DECISION
We perform a hyperparameter search for all evaluated methods for the following hyperparameters.
• The confidence threshold θ, below which generation terminates, is searched in {0, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8}.
• The steps horizon, which constrains the maximal number of procedural planning steps, is searched in {10, 20, 40}.
• The number of hops for retrieving the subgraph from the external knowledge base is searched in {1, 2, 3}.
• The ratio of maximal concepts to the length of the task name is searched in {1, 2, 3}.
• The cosine similarity threshold for keeping the task-specific concept is searched in {0.4, 0.6, 0.8}.
• The edge weight threshold θe is searched in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}.
• The top-k task-specific nodes value is searched in {1, 5, 10, 15, 20, 25, 50, 100}.
The configurations used in the experiments are: θ = 0.7, a 20-step horizon, 3 hops, a concept-to-task-length ratio of 3, a cosine similarity threshold of 0.4, θe = 0.6, and k = 10.
We empirically choose the hop number H as 3, considering both the input length limit of the LLMs and the fact that a 3-hop neighborhood contains reasonably relevant information in practice (Zhang et al., 2022). The final configuration is collected in the sketch below.
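A minimal sketch collecting the chosen values in one place; the key names are our own labels for the hyperparameters described above:

```python
CONFIG = {
    "confidence_threshold": 0.7,   # theta: terminate when the score drops below this
    "max_steps": 20,               # step horizon
    "num_hops": 3,                 # H: subgraph retrieval depth
    "concept_ratio": 3,            # max concepts / task-name length
    "concept_sim_threshold": 0.4,  # cosine cut-off for task-specific concepts
    "edge_weight_threshold": 0.6,  # theta_e
    "top_k_nodes": 10,             # task-specific nodes kept per query
}
```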
B.5 COMPUTATION AND RESOURCES
We use one single NVIDIA A100 GPU Server for all the experiments. Since there is no training in our zero-shot settings, the computation is only used for the inference stage of the experiments.
C EVALUATION DETAILS
C.1 CROWDSOURCING HUMAN EVALUATION
We conduct all the human evaluations (rating and win-lose comparison) on the Amazon Mechanical Turk platform. Each example is rated by 3 annotators. For every assignment, we ask Amazon Mechanical Turk workers to evaluate the quality of the provided low-level steps given the high-level task description. For the Win-Lose Comparison, they were asked to choose among the two provided model-generated results with 1: the first one is better, 2: equal, and 3: the second one is better. For the Human Ratings, they were asked to score each sample on a 5-point Likert scale. This process does not involve collecting any personal information, and we manually check that no offensive content is produced by the models.
The assignment layout templates for workers are shown in Figure 7 and Figure 6. Specifically, we evaluate 50 randomly selected task examples from each dataset (RobotHow and WikiHow) under all the settings (standard and counterfactual). We only keep the examples from workers who read the instructions carefully, which we check by whether they give a score of 1 to the empty program as a sanity check. The hourly wage paid to participants is estimated at $9, and the total amount spent on participant
3https://github.com/commonsense/conceptnet5/wiki/Relations
compensation is $1296. The details of the Human Intelligence Tasks process are described in the following sections.
C.1.1 WIN-LOSE COMPARISON
During the Human Intelligence Tasks, the workers are shown the following instructions: Read the given task and the sequence of steps, and determine which set of steps can better complete the target task. In other words, can the task be decomposed into these steps? Please consider the sequential order of the steps.
Then the program to be evaluated is provided as:
Question Task: Study
Sequence 1:: Step 1: Walk to textbook Step 2: Read book Step 3: Walk to book
Sequence 2:: Step 1: Walk to home office Step 2: Find desk
Finally, the workers are asked to score the program by following the instructions below: Select an option: 1 - Sequence 1 is better; 2 - Tie; 3 - Sequence 2 is better
The above example evaluates the order metric; for the coverage metric, the same process is conducted, except that the instructions are: Read the given task and the sequence of steps, and determine which sequence covers more steps that are necessary to complete the target task. Please ignore the sequential order of the steps.
C.1.2 HUMAN RATINGS
Similar to the Win-Lose Comparison Human Intelligence Tasks, the workers are shown the following instructions: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (please consider the sequential order of the steps). You may directly give the lowest score (1) for empty steps. In other words, can the task be decomposed into these steps? (Please consider the sequential order of the steps.)
Then the program to be evaluated is provided as:
Question Task: Write an email
Sequence of Steps: Step 1: Walk to home office Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Walk to computer Step 7: Find chair Step 8: Sit on chair Step 9: Find keyboard Step 10: Grab keyboard Step 11: Find mouse Step 12: Grab mouse Step 13: Type on keyboard
Finally, the workers are asked to score the program by following the instructions below: Use the slider below to indicate how much you agree with the following statement (1 = Strongly disagree, 5 = Strongly agree). If the "sequence of steps" is blank, please directly choose 1 (lowest score). The task can be completed in any reasonable scenario using the provided steps. [SLIDER PROVIDED HERE]
The above example evaluates the order metric; for the coverage metric, the same process is conducted, except that the instructions are: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (please ignore the sequential order of the steps). You may directly give the lowest score (1) for empty steps. In other words, can the task be decomposed into these steps? (Please ignore the sequential order of the steps.)
C.2 MORE RESULTS
Significance Test We provide paired t-test (p < 0.05) statistics for Table 2. On RobotHow, our PLAN significantly outperforms all baselines on Original-Order (BART) and Counterfactual-Coverage (GPT2). On WikiHow, our PLAN significantly outperforms all baselines on Original-Coverage (BART, GPT2), Counterfactual-Coverage (BART, GPT2), and Counterfactual-Order (BART). For the coverage metric under the counterfactual setting, the human-provided program is not significantly better than our PLAN.
We also conduct the paired t-test (p < 0.05) over the variants "w/o Adaption" and "w/o Symbolic". Compared with the full model PLAN, the variants experience a statistically significant performance drop; on BERTScore-f1 in particular, the p-values are 8.884e−13 and 1.4e−8, respectively. This further confirms the importance of these modules. A sketch of the test is shown below.
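The sketch assumes per-example metric scores for two systems on the same test items; the score values below are placeholders, not our actual measurements:

```python
from scipy import stats

plan_scores     = [0.61, 0.58, 0.72, 0.66, 0.70, 0.64]
baseline_scores = [0.55, 0.51, 0.69, 0.60, 0.62, 0.59]

# Paired t-test over matched test items.
t_stat, p_value = stats.ttest_rel(plan_scores, baseline_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # significant if p < 0.05
```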
Results on GPT-3 In addition, we conduct experiments with GPT-3 (davinci) using the OpenAI API. We showcase the comparison in Table 9 and Table 10.
Motivation of Evaluation Metrics Since the procedural planning task is open-domain in nature, the gold plans may not be unique. This makes the common automatic metrics proposed for natural language tasks imperfect for evaluating procedural planning; the same difficulty of directly judging the system with automatic metrics is discussed in LLMaP (Huang et al., 2022) as well. We assume that the human evaluation of Coverage and Order reflects how well the procedural plans match the human-annotated programs, because the human annotators are explicitly required to determine whether the task can be completed in any reasonable scenario using the procedural plans. Thus we provide both automatic and human evaluation on the two aspects, Coverage and Order, with descriptions in the Metrics paragraph in Section 4.1.
Evaluation on Success Rate Metric To make the human evaluations more intuitive, we provide an additional Success Rate metric that measures whether the procedural plans can successfully implement the task, focusing on success rather than the coverage or order of the plans. We show the Success Rate evaluations of the baselines and our method in Table 11. The assignment layout template for workers is shown in Figure 8.
More Ablation To verify the contribution of the first translation language model LMT, which translates the knowledge prompt PG into the admissible one P̂G, we conduct an additional ablation experiment that simply removes the first LMT and replaces P̂G with PG to prompt the LLM for procedural planning. We provide results with comparisons to the other ablations in Table 12.
Results on Counterfactual Task Samples We show automatic evaluation results on counterfactual RobotHow in Table 13.
D QUALITATIVE EXAMPLES
D.1 INTERMEDIATE OUTPUT
We provide running examples with intermediate outputs for each module in the following paragraphs. First, we show the input task T, the subgraph Gs depicted as tuples of (start node, relation type, tail node, edge weight), the knowledge prompt PG, and the translated prompt P̂G, as below:
• Input task T : Take shower.
• Human-annotated Plan Reference: Step 1: Walk to bathroom. Step 2: Walk to clothes dress. Step 3: Find clothes dress. Step 4: Put off clothes dress. Step 5: Find shower. Step 6: Enter shower. Step 7: Find soap. Step 8: Grab soap. Step 9: Scrub soap. Step 10: Put back soap. Step 11: Leave shower. Step 12: Find towel. Step 13: Grab towel. Step 14: Wipe towel. Step 15: Find clothes dress. Step 16: Put on clothes dress.
• Task-relevant subgraph Gs(Nhead, Re, Ntail, Ew): (take a shower, HasLastSubevent, dry off, 6.0); (bathe, HasLastSubevent, dry off, 6.0); (take a shower, HasPrerequisite, take out your clothes, 4.47); (take a shower, HasSubevent, get clean, 4.47); (take a shower, HasPrerequisite, take your clothes off, 3.46); (go to a party, HasPrerequisite, take a shower, 2.82); (play lacrosse, HasLastSubevent, take a shower, 2.82); (get clean, HasPrerequisite, take a shower, 2.82); (take a shower, MotivatedByGoal, wash your hair, 2.82); (play sports, HasLastSubevent, take a shower, 2.82); (go to the hairdresser, HasPrerequisite, take a shower, 2.82); (take a shower, HasPrerequisite, turn on the water, 2.0); (have a bath, HasLastSubevent, dry off, 2.0); (get wet, HasSubevent, dry off, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); (take a shower, HasSubevent, wash your hair, 2.0); (take a shower, HasLastSubevent, turn off the water, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); (take a shower, HasLastSubevent, put your clothes on, 1.0); (take a shower, HasSubevent, use shampoo, 1.0); (take a shower, HasSubevent, wash behind your ears, 1.0); (take a shower, HasSubevent, wash your body, 1.0); (take a shower, HasPrerequisite, go to the bathroom, 1.0); (take a shower, HasPrerequisite, go to the bathroom and undress, 1.0); (take a shower, HasPrerequisite, step into the shower, 1.0); (take a shower, HasPrerequisite, soap up, 1.0); (take a shower, HasPrerequisite, bring some soap, 1.0); (take a shower, HasPrerequisite, bring some shampoo, 1.0); (take a shower, HasLastSubevent, towel yourself off, 1.0); (take a shower, HasPrerequisite, remember to take your towel, 1.0);
• Knowledge prompt PG: Step: take out your clothes. Step: set clean clothes. Step: go to the bathroom. Step: go to the bathroom and undress. Step: take your clothes off. Step: turn on the water. Step: bring some soap. Step: bring some shampoo. Step: remember to take your towel. Step: get clean. Step: wash your hair. Step: use shampoo. Step: wash behind your ears. Step: wash your body. Step: turn off the water. Step: dry off. Step: become more clean. Step: put your clothes on.
• Translated knowledge prompt P̂G: Step: find clothes underwear. Step: wash shower. Step: walk to bathroom. Step: look at shower. Step: take off clothes underwear. Step: wash shower. Step: find soap. Step: grab soap. Step: scrub shower. Step: find shampoo. Step: grab shampoo. Step: find towel. Step: wipe shower. Step: wash hair. Step: wipe shampoo. Step: scrub shower. Step: wash body. Step: switch off washing machine. Step: scrub shower. Step: wipe shower. Step: put on clothes underwear.
• Generated plan ST : Step 1: Find clothes underwear. Step 2: Walk to Bathroom. Step 3: Take off clothes shirt. Step 4: Scrub shower. Step 5: Pour shampoo into hair. Step 6: Wipe shampoo. Step 7: Wipe hair. Step 8: Wash body. Step 9: Find Towel. Step 10: Put on clothes underwear.
D.2 PREDICTED PROCEDURAL PLANS
More randomly selected qualitative examples of the final predicted procedural plans are provided. Table 14 shows random samples on the original dataset. Table 15 shows random samples on the counterfactual datasets with the Intermediate Step intervention method. And Table 16 shows random
samples on the counterfactual RobotHow with the Initial Configuration and Final Goal intervention methods.
RobotHow, Task: Play Games
Human: Step 1: Walk to bedroom. Step 2: Walk to video game console. Step 3: Find video game console. Step 4: Walk to video game console. Step 5: Switch on video game console. Step 6: Find television. Step 7: Switch on television. Step 8: Find video game controller. Step 9: Grab video game controller. Step 10: Find chair. Step 11: Walk to chair. Step 12: Sit on chair.
Chain: Step 1: Put chef knife on water glass. Step 2: Find sink.
LLMaP: Step 1: Walk to bedroom. Step 2: Walk to video game console. Step 3: Find video game console.

RobotHow (Initial Configuration), Task: Play games in dining room.
Human: Step 1: Walk to dining room. Step 2: Walk to video game console. Step 3: Find video game console. Step 4: Walk to video game console. Step 5: Switch on video game console. Step 6: Find television. Step 7: Switch on television. Step 8: Find video game controller. Step 9: Grab video game controller. Step 10: Find chair. Step 11: Walk to chair. Step 12: Sit on chair. Task: Play games in dining room. Step 1: Walk to video game controller. Step 2: Put video game controller on diningtable. Step 3: Put boardgame on kitchen table. Step 4: Put boardgame on diningtable.
Chain: Step 1: Walk to dining room. Step 2: Walk to sauce pan. Step 3: Pour bottle water into dish bowl. Step 4: Walk to water. Step 5: Walk to carrot. Step 6: Walk to food salt.
E DISCUSSION
E.1 LIMITATIONS
Though this work points out a direction for prompting actionable knowledge out of large-scale pre-trained language models with external commonsense knowledge, the limitations of reasoning about long-horizon procedural plans remain. Existing datasets for procedural planning, such as WikiHow and RobotHow, are monolingual and support only English goals and plans. In the future, it is important to expand these datasets or construct novel datasets that support the multiple languages used across the world. The inherent differences between these languages may also result in different planning strategies at various granularities or abstraction levels, which is potentially challenging. In addition, long-horizon and complex composite tasks remain challenging for existing procedural planners.
The above limitations are discussed mainly with respect to the challenges of the procedural planning task. In addition, there are limitations of our implementation, which is guided by our causal analysis. First, the coverage of the leveraged external resources is limited, which is common in knowledge-enhanced systems. This may result in a wrong understanding of the task and produce unreasonable procedural plans. For example, the word "Turking", which refers to "The act or process of performing small tasks using the Amazon Mechanical Turk service" according to Wiktionary, is not covered in the external resources (e.g., ConceptNet). Since our proposed system does not assume specific external resources, it is plausible to utilize more powerful external resources (e.g., Wiktionary) in the future. Second, the hop number and the threshold of the multi-hop retrieval in task-relevant subgraph sampling are currently configured hyperparameters, which may result in prompts that are not ideally constructed. Future work could make these hyperparameters learnable for each task domain, and also explore the pros and cons of end-to-end commonsense-infused prompts versus neuro-symbolically constructed prompts.
E.2 FAILURE ANALYSIS
We discuss detailed failure modes and examples with analyses below. Consider the predicted procedural plans for the task "Turking", which refers to "The act or process of performing small tasks using the Amazon Mechanical Turk service" according to Wiktionary. We compare the predicted procedural plans for this task among the baselines and our method: (1) The ground truth plan is "Task: Turking. Step 1: Walk to home office. Step 2: Walk to desk. Step 3: Find chair. Step 4: Sit on chair. Step 5: Find computer. Step 6: Switch on computer". (2) The plan predicted by the Chain baseline is empty. (3) The plan predicted by the LLMaP baseline is "Task: Turking. Step 1: Put teddybear on oven." (4) Our prediction is "Task: Turking. Step 1: Eat food turkey. Step 2: Drink water. Step 3: Sleep." We can see that for such "out-of-knowledge" tasks, our method also produces failed plans. We assume this is mainly due to the limited knowledge in the external resources, as discussed in Appendix E.1, and this main failure mode can be mitigated by introducing larger external resources (e.g., Wiktionary), similar to other knowledge-enriched methods.
E.3 ETHICAL CONSIDERATIONS
We hope to de-bias procedural planning to avoid misleading either humans or robots with daily-life instructions, which may result in unsafe situations. The cultural bias behind these datasets can be a critical issue for future work. As the ground-truth planning steps usually reflect the culture shared by the English-speaking community, other cultures may have completely different practical considerations that lead to different orders of these steps, or even novel steps that are not proposed by the LLMs we utilized in this paper. In the future, we will consider cultural bias as a proxy variable so that we can adjust the implicit knowledge from LLMs or the commonsense from external sources according to the needs of different cultural backgrounds. | 1. What is the main contribution of the paper regarding symbolic reasoning in neural models?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of novelty and generalizability?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper that the reviewer would like to be addressed? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work shows that by adding a dash of symbolic reasoning to a neural model, it achieves better performance w.r.t. consistency and generalization. It is unclear to me how much engineering effort is required to add these symbolic reasoning components. It appears their approach is general, using a common External Knowledge Base, but the writing is confusing and I cannot tease out the details.
Strengths And Weaknesses
Strength: the proposed approach works. They conduct a user study where they ask crowd workers to rate which agent, one with symbolic reasoning and one without, performed better on a task, and the crowd workers preferred the agent with symbolic reasoning. This result is solid and shows evidence for the proposed approach.
Weakness:
The proposed method may not be entirely novel. People have been adding symbolic reasoning to neural models for a while, and the finding has always been: "If we can successfully 'hack' the underlying DSL that represents the set of tasks, adding symbolic reasoning performs well". For instance, these works tend to follow the steps of: 1) identify a set of tasks that can be easily represented with symbolic execution, and 2) devote significant engineering effort to constructing the DSL and a symbolic interpreter to help the neural/LLM model make better inferences/plans.
This work would be a significant contribution if it can show that steps 1) and 2) can be avoided by using a generic external knowledge base (as shown in Figure 3); however, the writing is too confusing for me to be sure whether that is the case.
Clarity, Quality, Novelty And Reproducibility
Clarity: poor. This is a huge problem because the writing prevented me from judging the work clearly.
In the introduction there's this block of text that reads
"adjustment (Hu & Li, 2021; Weber et al., 2020; Yue et al., 2020) are not applicable in our SCM. Instead, we build a mediator and implement it as a commonsense-infused prompt. Through the mediator, we can identify causal effects among goals and steps by investigating the indirect effect from the goals, which is essentially the frontdoor adjustment in causality (Pearl, 2009)."
What is it even saying? I have zero clue. What is a mediator? What is a commonsense-infused prompt? What are these "indirect affect from the goals" mean? What is "essentially the frontdoor adjustment" mean?
These are highly technical terms that mean very little unless explicitly defined. The reader tends to look for easy metaphors and intuitions for why your approach should work, and why intuitively it should work well. This passage sounds intuitive, yet it uses words that nobody knows the meaning of (yet), and it ends up being just gibberish.
This confusion continued for the rest of the paper, making it hard for me to judge if it is worthwhile.
A re-write of the intro section is warranted, with a concrete example explaining why the proposed approach should work well, without the jargons.
I highly recommend the authors ask people outside of their immediate project -- walk down the hallway a few offices and knock on some doors -- to read the paper and give feedback, and adjust the paper based on what was confusing.
Quality: unclear / potentially good. The human evaluation is clearly stated, and I can feel confident in saying "the approach is performing better than the baseline". However, I would also like to assess "is this approach general, or is it domain-specific and hacky?" This is hard to judge, as the work seems very complex with many moving parts (in Figure 3 there are 5 stages), and the writing isn't clear.
Novelty: unclear / potentially good, for the same reason as above. If this work is generalizable to different domains with very little tweaking, then it definitely has merit; most prior works that bring symbolic reasoning into neural models heavily rely on a DSL, i.e., a domain-SPECIFIC language, and aren't really generalizable.
ICLR | Title
Neuro-Symbolic Procedural Planning with Commonsense Prompting
Abstract
Procedural planning aims to implement complex high-level goals by decomposition into simpler low-level steps. Although procedural planning is a basic skill set for humans in daily life, it remains a challenge for large language models (LLMs) that lack a deep understanding of the cause-effect relations in procedures. Previous methods require manual exemplars to acquire procedural knowledge from LLMs in the zero-shot setting. However, such elicited pre-trained knowledge in LLMs induces spurious correlations between goals and steps, impairing the model’s generalization to unseen tasks. In contrast, this paper proposes a neuro-symbolic procedural PLANner (PLAN) that elicits procedural knowledge from the LLMs with commonsense-infused prompting. To mitigate spurious goal-step correlations, we use symbolic program executors on the latent procedural representations to formalize prompts from external knowledge bases as a causal intervention toward the Structural Causal Model of procedural planning. Both automatic and human evaluations on WikiHow and RobotHow show the superiority of PLAN on procedural planning without further training or manual exemplars.1
1 INTRODUCTION
How to make a cup of coffee? As humans, we can easily specify a procedure to solve this task, using our innate ability of commonsense reasoning. However, can we endow machines with the same ability to construct a sequential plan? As depicted in Figure 1, procedural planning (Pearson, 1996; Zhang et al., 2020b; Huang et al., 2022) aims to decompose a high-level goal (Task: Watch TV) into a sequence of temporally extended steps (Procedural Plan: Step at all five time-steps).
We study procedural planning as a conditional text generation problem since it resembles real-world scenarios. Previous approaches (Huang et al., 2022; Ahn et al., 2022) require a small number of carefully written or held-out exemplars to acquire procedural knowledge. However, such manual exemplars, evolved from task data, cannot cover the ever-changing task setups and the flexible dependency relations among goals and steps. In fact, the biased data may cause the model to learn spurious correlations and hinder it from generalizing well in zero-shot scenarios. Studies in cognitive science show that humans rely on chunking mechanisms (Gobet et al., 2001; Miller, 1956), which turn primitive stimuli into conceptual groups, to solve novel and complex problems. Inspired by this, we hypothesize that generalizable procedural planning ability can be achieved by learning cause-effect relations among complex goals and simpler steps using external knowledge.
To reveal the cause-effect relations in procedural planning, we devise a Structural Causal Model (SCM) (Peters et al., 2017), a directed acyclic graph commonly used to describe the causal relationships within a system (Pearl, 2009). As depicted in Figure 2, the pre-trained knowledge (D) in LLMs (e.g., TV and living room are highly correlated) confounds the system (D influences T, Si−1, and Si, resulting in spurious correlations) into making biased decisions toward an unreasonable step (e.g., Find
1Source code and datasets are publicly available at https://sites.google.com/view/iclr-clap
Television). Thus, we adopt the front-door adjustment (definition in Appendix A.3), which utilizes a mediator (Pi) that blocks all directed paths from the cause (T or Si−1) to the effect (Si). In this way, T (or Si−1) affects Si only through indirect paths: T (or Si−1) affects Pi, and Pi affects Si. We can then identify the causal effects among goals and steps by investigating the indirect effect (Equation 3), which is computed by multiplying the effect of T (or Si−1) on Pi (Equation 1) with the effect of Pi on Si (Equation 2). With the above front-door adjustment, we can mitigate the spurious correlations (e.g., between "television" and "living room") and thus make reasonable decisions on steps (e.g., Find book). Please refer to A.1 for causal preliminaries (including explanations of SCM, confounder, mediator, and spurious correlations), and A.3 for the front-door adjustment definition.
Guided by the above causal analysis of procedural planning, we need to construct the mediator Pi and then intervene on the task T and the prompt Pi, which is required to compute the conditional probability in Equation 3. As depicted in Figure 3, we seek to automatically construct commonsense-infused prompts as the mediator Pi by concatenating the task and previous steps with commonsense knowledge extracted from external resources (e.g., ConceptNet (Speer et al., 2017)). First, we modify the goal input by sampling a task-relevant knowledge subgraph (Stage1 in Section 3.1) to implement interventions on T. Then, we modify the prompt by adapting the edge weights to implement interventions on Pi (Edge-Wise Adaption of Stage2 in Section 3.1). However, directly incorporating graph-structured knowledge into LLMs loses the logical order needed to elicit procedural knowledge from LLMs. Thus, we apply symbolic executors (Mao et al., 2019; Yi et al., 2018) that execute a sequential mapping program on latent knowledge representations (e.g., the subevent of). In this way, we translate graph-structured knowledge into natural language that preserves the procedural structure, such as the sequential order of two low-level steps (Symbolic Structuring of Stage2 in Section 3.1). The procedural prompt PG (e.g., "please get the remote control") is further translated into an admissible one P̂G (e.g., "grab remote control") from the available steps in a certain domain (RobotHow or WikiHow in our case). Finally, we utilize the commonsense-infused prompt P̂G to control the generation of procedural plans in LLMs in a zero-shot setting (Section 3.2).
We conducted experiments on RobotHow (Puig et al., 2018) and WikiHow (Koupaee & Wang, 2018) under original and counterfactual situations. Our major contributions can be summarized as:
• We develop the first causal framework for procedural planning by 1) defining a temporally extended Structural Causal Model and 2) resolving spurious correlation between high-level goals and low-level steps via front-door adjustment with a prompt-based mediator. • We propose a neuro-symbolic approach to construct commonsense-infused prompts for LLMs to tackle the procedural planning task without manual exemplars or further training. • Extensive evaluations show the superiority of PLAN in terms of reasoning about the causeeffect relations among goals and steps and achieving promising planning ability.
2 EXTERNAL KNOWLEDGE MATTERS IN PROCEDURAL PLANNING
As depicted in Figure 1, procedural planning requires generating the Plan (e.g., Step 1: Walk to the living room.) conditioned on the Task (e.g., Watch TV). We first describe the problem definition
and then show why external knowledge matters in procedural planning through the lens of causality. Finally, we show how we elicit procedural ability from the Large Language Models (LLMs).
2.1 PROBLEM DEFINITION
Given a high-level task T (e.g., watch television in the living room) sampled from a task domain MT (e.g., RobotHow), a procedural planner aims to decompose it into lower-level temporally extended steps ST = {S1, ..., Si | Si ∈ S̄}. There exists a set of admissible steps S̄, which is fixed and constrained by the task domain MT (e.g., the affordances of the interacted objects). The step Si at timestep i is generated as π(Si | T, S0:i−1).
2.2 A CAUSAL LOOK AT PROCEDURE PLANNING WITH LLMS
We seek to empower the LLMs with the ability to reason cause-effect relations in procedural planning. Thus, we devise a causal framework by first defining a Structural Causal Model (SCM) of procedural planning in Figure 2. The SCM describes the temporal dynamics and procedural cause-effect relationship. Our causal assumption in SCM indicates that there is a backdoor path from task to step, which must be blocked with front-door adjustment. Therefore, we model the input prompt as a mediator which is created from external knowledge. More specifically, we define our Full Temporal Causal Graph as in Figure 2a, which is an unrolled Structural Causal Model (SCM) for sequential decision-making. Our goal is to identify the causal relations between the attended task T and plan procedures ST = {S1, S2, . . .} from LLMs. Initially, there are direct paths T → Si and Sk → Si, k < i because Si relies on the LLM attended task entities and previous accomplished steps. D is an unobserved confounder from learned knowledge during pre-training. D builds a backdoor path between T and Si and misguides the LLMs to attend to false entities to generate the next step (see Fig. 2b). Note that D is unobservable as we directly adopt the LLM without knowing the pre-training data. To mitigate the spurious correlation, we then introduce a mediator Pi for each Si as shown in Figure 2a. To achieve our front-door adjustment, we inject external knowledge into LLMs with a neuro-symbolic approach by adopting three stages described in Section 3.1.
3 OUR APPROACH
Although LLMs have strong general language intelligence, they still perform poorly in reasoning the cause-effect relations in procedural plans due to a lack of daily life experience. We propose to elicit the unbiased procedural planning knowledge from the LLMs using the created commonsense-infused Prompt P as π(Si|T, S0:i−1, P ). Figure 3 and Algorithm 1 depict how PLAN tackles the procedural
planning in a five-stage manner. We illustrate the commonsense-infused prompt construction (the first three stages) in Section 3.1 and planning with LLMs (the last two stages) in Section 3.2.
3.1 COMMONSENSE-INFUSED PROMPT CONSTRUCTION
Overview Inspired by the causal analysis in Section 2.2, we propose to construct a commonsense-infused Prompt P that helps reveal the cause-effect relations among goals and steps during procedural planning, within 3 stages: 1) Stage1 samples a subgraph Gs from the external knowledge base G by extracting task(T)-relevant nodes. 2) Stage2 adapts the edge weights Ew in Gs and applies symbolic structuring to get the admissible knowledge prompt P̂G. 3) Stage3 acquires the temporal order by temporally aggregating the prompt Pi with previous steps S0:i−1.
Stage1:Task-Relevant Knowledge Subgraph Sampling First, we investigate the causal effect T → Pi and Si−1 → Pi (Figure 2). Si is a collider that blocks the association between D and Pi in the path T ← D → Si ← Pi. Let πi denote π(·|Pi−1) that represent the probability density function conditioned on Pi−1. Since there is no backdoor path for T → Pi and similarly for Si−1 → Pi, we simply have the conditional probability after applying do-operators:
$$\pi_i(P_i = p \mid do(T)) = \pi_i(P_i = p \mid T), \qquad \pi_i(P_i = p \mid do(S_{i-1})) = \pi_i(P_i = p \mid S_{i-1}) \tag{1}$$
We achieve the do-operation in a prompting way by modifying the goal input so that the model attends to the task-relevant entities. To implement this, we use NLTK to tokenize and POS-tag the task text T. We then use the nouns (e.g., television), noun phrases (e.g., remote control), and verb phrases (e.g., watch television) as concept nodes. In this way, the task name T is Semantically Parsed into the Concept Set TE (a minimal sketch is given below). Each concept e ∈ TE is used as a query for sampling the H-hop task-relevant subgraph Gs ⊆ Ne × Rs × Ne from the external knowledge base G ⊆ N × R × N, where N and R represent the number of concept nodes and commonsense relations, respectively. When extracting Gs, we keep the triplets with relation types in the household domain (e.g., AtLocation, UsedFor) and filter out those in the linguistic domain (e.g., DistinctFrom, DerivedFrom) for the procedural planning task. Ne is maintained as a set of top-k task-relevant nodes using the weight of each Re, which is updated with the edge-wise adaption in Stage2.
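This is a minimal sketch of the semantic parsing step; it assumes the standard NLTK resources ("punkt", "averaged_perceptron_tagger") have been downloaded, and it keeps only single-token nouns for brevity (noun and verb phrases would be added analogously with a chunker):

```python
import nltk

def parse_task_concepts(task: str):
    tokens = nltk.word_tokenize(task)
    tagged = nltk.pos_tag(tokens)
    # Keep nouns as concept nodes.
    return [w.lower() for w, tag in tagged if tag.startswith("NN")]

print(parse_task_concepts("watch television in the living room"))
# e.g. ['television', 'living', 'room']; each concept then queries the H-hop subgraph
```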
Stage2:Edge-Wise Adaption and Symbolic Structuring Second, we need to find the causal effect for Pi → Si. Since the path Pi ← T ← D → Si contains a backdoor from Pi to Si, we cannot rely on the conditional probability. Instead, we intervene on Pi using the do-operator to cut off D → T:
$$\pi_i(S_i \mid do(P_i = p)) = \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\,\pi_i(T = t, S_{i-1} = s) = \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\,\pi_i(S_{i-1} = s \mid T = t)\,\pi_i(T = t) \tag{2}$$
The retrieved concept-centered graph has multiple edges representing various relationships with other actions/entities. Therefore, the summation over the intervened T can be achieved by incorporating these edges into the prompt. For instance, "living room" can be "walked to" and "used for reading", while "book" can be located in "living room" and "bedroom". Similarly, we extrapolate over the edges for i − 1 hops to aggregate the intervened Si, i.e. π(Si−1 = s | T = t). Directly ranking the retrieved nodes Ne with the annotated weights (Ew) in the external knowledge base would result in spurious correlations, because such retrieved local subgraphs tend to capture task-invariant concept nodes as the causal factors. To mitigate this, we propose to adapt the weight of each triplet (Edge-wise Adaption). The adapted weight is the sum of the original edge weight and the cosine similarity between the tail node embedding $n_{E_{tail}}$ of the edge Re and the task embedding $v_{task}$, i.e. $\hat{E}_w \leftarrow E_w + \mathrm{cosine}(n_{E_{tail}}, v_{task})$. The embeddings are projected from the node text and task name using the sentence-transformer (Reimers & Gurevych, 2019). The nodes Ne are finally retrieved by ranking the adapted weight Êw. To better track the utilized external knowledge during inference, we construct the task-dependent commonsense prompt with a Symbolic Executor (Symbolic Structuring) guided by the relation type of each triplet in Gs whose adapted edge weight is beyond the threshold θe. Specifically, the Symbolic Executor acquires the neural information of each natural language node and executes the sequential mapping program by sampling the operation Op from the Symbolic Rule Set R according to the edge relation type. The Symbolic Rule Set R is obtained by mapping the descriptions of the relations in the external knowledge graph (e.g., AtLocation represents "A is a typical location for B, or A is the inherent location of B. Some instances of this would be considered meronyms in WordNet.") to symbolic operations (e.g., Op AtLocation). For instance, the AtLocation edge samples the operation Op AtLocation from R, which takes the commonsense relation of the triplet from Gs as the parameter to query the procedural concept output given the natural language meaning of the linked nodes (e.g., go to the location of Start Node Of(re) in this case). Similarly, Op UsedFor may refer to "go to find End Node Of(re) and use it for Start Node Of(re)". The operators Op HasSubevent and Op HasPrerequisite recursively navigate the subgraph Gs. After navigating the subgraph, we linearize the transformed triplets as the Procedural Prompt PG, which is then translated to the Admissible Knowledge Prompt P̂G by the Translation Language Model LMT. A sketch of this stage follows.
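The sketch combines edge-wise adaption with a small table of relation-to-template rules; the template wordings and the encoder checkpoint are illustrative assumptions, not the paper's exact rule set:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("stsb-roberta-large")  # assumed checkpoint

# Relation type -> natural-language template (a toy stand-in for the rule set R).
SYMBOLIC_OPS = {
    "AtLocation":      lambda head, tail: f"go to the {tail}",
    "UsedFor":         lambda head, tail: f"find the {head} and use it for {tail}",
    "HasPrerequisite": lambda head, tail: f"first, {tail}",
    "HasSubevent":     lambda head, tail: f"{tail}",
}

def adapt_and_verbalize(triplets, task, threshold=0.6):
    task_emb = encoder.encode(task, convert_to_tensor=True)
    prompt_steps = []
    for head, rel, tail, weight in triplets:
        tail_emb = encoder.encode(tail, convert_to_tensor=True)
        adapted = weight + util.cos_sim(task_emb, tail_emb).item()  # E_w + cosine
        if adapted > threshold and rel in SYMBOLIC_OPS:
            prompt_steps.append("Step: " + SYMBOLIC_OPS[rel](head, tail))
    return " ".join(prompt_steps)

triplets = [("take a shower", "HasPrerequisite", "turn on the water", 2.0),
            ("soap", "AtLocation", "bathroom", 1.0)]
print(adapt_and_verbalize(triplets, "take a shower"))
```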
Stage3:Temporally-Extended Aggregation To acquire temporal order in the procedure, we obtain the Prompt P at timestep i with the aggregation of task T , history steps S0:i−1 and current external knowledge P̂G. The underlying causal mechanism is a combination of Eq. 1 and Eq. 2:
$$\pi_i(S_i \mid do(T), do(S_{i-1})) = \sum_{p} \pi_i(S_i \mid do(P_i = p))\,\pi_i(p \mid do(T), do(S_{i-1})) = \sum_{p} \pi_i(p \mid T) \sum_{t,s} \pi_i(S_i \mid p, T = t, S_{i-1} = s)\,\pi_i(T = t, S_{i-1} = s) \tag{3}$$
The adjustment and marginalization in Eq. 3 are achieved in the input space by forming the Procedural Prompt PG, which allows the LLM to attend to the causal entities instead of the highly correlated ones for next-step generation. The LLM can reason over the most relevant edges to link the concepts with the task entities as context. The prompts from knowledge bases are independent of the pre-training data distribution, so Pi is independent of D and satisfies the front-door criterion. Please refer to Appendix A.3 and Figure 4 for the simplification of our structural causal model.
3.2 PROCEDURAL PLANNING WITH LARGE LANGUAGE MODELS
Stage4:Semantic Generation The external knowledge is further concatenated with the goal input (T) as the initial prompt. Given the prompt, the generation language model LMG ∈ {PAR, PAE} (e.g., GPT3, BART) generates the next sentence, and the most confident prediction is then appended to the previous prompt. The Termination Condition is either reaching the maximum number of steps t or the matching score falling below the threshold θ. The joint probabilities of the auto-regressive (PAR) and auto-encoder (PAE) models are factorized as:
$$\pi_{AR}(x) = \prod_{n=1}^{N} p(s_n \mid \hat{P}_G, s_{1:n-1}, T), \qquad \pi_{AE}(x) = \prod_{n=1}^{N} p(s_n \mid \hat{P}_G, \{s_{1:n-1}, [\mathrm{MASK}]\}, T) \tag{4}$$
where P̂G represents the commonsense knowledge and T represents the task name.
Algorithm 1 Neuro-Symbolic Procedural Planning using Commonsense-Infused Prompting
Require:
Task Sample T , Admissible Step Set S, External Knowledge Graph G; Language Model for Generation LMG and Translation LMT , Symbolic Rule Set R;
Ensure:
1: [Stage1] Semantically parse T into entity set TE;
2: Maintain top-k task-relevant nodes Ne in TE;
3: Retrieve subgraph Gs ⊆ Ne × Rs × Ne from G ⊆ N × R × N for each e ∈ TE;
4: [Stage2] Edge-wise adaption as Êw ← Ew + cosine(nEtail, vtask) and re-rank Ne in TE;
5: Map the description text of the relations Rs in Gs as Symbolic Rule Set R;
6: Construct procedural prompt PG by verbalizing the re-weighted Gs using R;
7: Translate PG into Admissible Knowledge Prompt P̂G = LMT(PG);
Temporally-extended zero-shot inference for Procedural Plan ST = {S1, ..., Si}:
8: for each timestep i do
9: [Stage3] Aggregate Prompt Pi ← [T; S0:i−1; P̂G];
10: [Stage4] and [Stage5] Si = LMT(LMG(Pi));
11: Update Procedural Plan ST ← Si;
12: end for
Stage5:Admissible Step Translation To ensure that the generated procedural plans are grounded to the environment, we should avoid producing the steps that are inadmissible (e.g. Toast the table). In other words, the generated steps should be fully constrained to the admissible composite of action and object in a certain task domain. Thus previous works (Huang et al., 2022; Ahn et al., 2022) have explored using the model (which is LMT in our case) to score a step selected from a fixed set of available options, instead of directly sampling from the output distributions of the language model (which is LMG in our case). Specifically, we match the generated step by LMG to the most similar admissible step in the embedding space encoded by the Translation Language Model LMT . Following (Huang et al., 2022), we utilize a Sentence-Transformer (Reimers & Gurevych, 2019) to calculate the cosine similarity as π(si|x) = LMT (LMG(x)), which translates LMG(x) into the admissible step si ∈ S̄ that is the closest in the embedding space measured by the cosine similarity.
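A minimal sketch of Stage4 and Stage5 together is given below; the `generate_next` argument stands in for LM_G and is not a real API, and the encoder checkpoint is an assumption:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("stsb-roberta-large")  # LM_T (assumed checkpoint)

def translate(generated: str, admissible_steps, theta=0.7):
    # Map the free-form generation to the closest admissible step by cosine similarity.
    gen_emb = encoder.encode(generated, convert_to_tensor=True)
    step_embs = encoder.encode(admissible_steps, convert_to_tensor=True)
    scores = util.cos_sim(gen_emb, step_embs)[0]
    best = int(scores.argmax())
    return admissible_steps[best], float(scores[best])

def plan(task, knowledge_prompt, admissible_steps, generate_next,
         max_steps=20, theta=0.7):
    steps = []
    for _ in range(max_steps):
        prompt = f"{knowledge_prompt} Task: {task}. " + " ".join(
            f"Step {j + 1}: {s}." for j, s in enumerate(steps))
        step, score = translate(generate_next(prompt), admissible_steps)
        if score < theta:  # termination: matching score below threshold
            break
        steps.append(step)
    return steps
```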
3.3 COUNTERFACTUAL PROCEDURAL DATA CONSTRUCTION
To investigate the counterfactual reasoning ability, we design three families of intervention methods: 1) Initial Configuration: intervene in the initial configuration, such as the location for implementing the task. 2) Intermediate Step, randomly select one step from the ground truth program as an additional constraint of implementing the task and append it to the task name for generating the procedural plan. 3) Final Goal, intervene the task goal as the composite of another randomly sampled task. Table 5 in the Appendix summarizes the category and description. The counterfactual dataset construction details and post-intervention examples are provided in Appendix B.2.
4 EXPERIMENTS
4.1 PROCEDURAL PLANNING SETUP
Datasets We conduct zero-shot experiments on two datasets with procedural information, WikiHow (collected following (Koupaee & Wang, 2018)) and RobotHow (Puig et al., 2018), without training. WikiHow is a large-scale text summarization dataset constructed from a human-written knowledge base, involving procedural tasks that span various topics. We utilize the "how to" title as the task name and the summarized headlines as the steps. RobotHow is a large knowledge base of common household tasks collected in the VirtualHome (Puig et al., 2018) simulator. The dataset contains programs with high-level task names and low-level steps. MT is composed of 292 and 2000 distinct tasks from RobotHow and WikiHow, respectively. Human evaluations use 50 randomly sampled task examples for each dataset. Automatic evaluations use 150 and 1000 task examples randomly sampled from RobotHow and WikiHow, respectively. Please refer to Appendix B.1 and Appendix B.2 for dataset details.
Baselines We compare our approach with three vanilla generative pre-trained language models (BART, GPT2, and GPT3) and two powerful generation baselines (Zero-shot Planner (Huang et al., 2022) noted as “LLMaP” and Chain of Thought (Wei et al., 2022) noted as “Chain”). More method and configuration details of the models can be found in Appendix B.3 and Appendix B.4.
Metrics We ask human annotators on the Amazon Mechanical Turk platform to rate model performance on two aspects: 1) Coverage: which sequence covers more steps that are necessary to complete the target task (captures semantic completeness). 2) Order: which set of steps, considering their sequential order, can better complete the target task (captures sequential order correctness). In addition, we use Sentence-BLEU (S-BLEU) (Papineni et al., 2002), BERTScore (Zhang* et al., 2020), ROUGE-1 (Lin, 2004), and Word Mover's Distance (WMD) (Kusner et al., 2015) as automatic evaluation metrics. These metrics compute semantic scores between the annotated programs and the predictions (a sketch is shown below). Details of the crowdsourcing human evaluation can be found in Appendix C.1.
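A minimal sketch of the Sentence-BLEU computation between an annotated program and a prediction; the two plans here are hypothetical, and BERTScore, ROUGE-1, and WMD can be computed analogously with the bert-score, rouge-score, and gensim packages:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference  = "walk to living room switch on television sit on sofa".split()
prediction = "walk to living room find television switch on television".split()

smooth = SmoothingFunction().method1
print(sentence_bleu([reference], prediction, smoothing_function=smooth))
```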
4.2 HUMAN EVALUATION RESULTS WITH COVERAGE AND ORDER METRIC
Each example is rated by 3 crowdsourcing annotators. For the Win-Lose Comparison, we ask the human rater to choose between ours and the baseline LLMaP (Huang et al., 2022). Averaged results reported in Table 1 show that our PLAN is more frequently rated as better for both coverage and order metrics, outperforming the baselines in winning ratio by 21% in coverage and 26% in order across the two datasets. We report the average results of Human Ratings on a 5-point Likert scale in Table 2. The consistent performance boost of PLAN indicates the superiority of injecting external commonsense knowledge into the procedural planning task. The performance drop of LLMaP and Chain in the counterfactual setting indicates the vulnerability of fixed held-out knowledge and pre-defined manual exemplars in causal procedural planning. Please refer to Appendix C.1 for the crowdsourcing human evaluation interface details. Table 3 shows two examples for qualitative comparison. More examples can be found in Appendix D.
4.3 AUTOMATICALLY MEASURING THE PROCEDURAL PLANNING
Main Results Table 4 summarizes the automatic evaluation results. PLAN achieves the best results regardless of the language model architecture, either autoregressive or autoencoder based. The performance gain of "LLMaP" over "Chain" is probably due to direct exposure to the held-out tasks from the dataset, while the "Chain" baseline still outperforms the vanilla baseline that only takes the high-level task name as the prompt. Note that the annotated program is not the
only solution, thus these automatic metrics provide limited absolute performance information. Details for the correlation between automatic metrics and human evaluation can be found in Section 4.5.
Effects of Edge-wise Adaption and Symbolic Program Execution The variant "w/o Adaption" maintains the top-k task-specific nodes ranked by the annotated weight EW in the external knowledge base G without adaption. The variant "w/o Symbolic" directly takes the extracted concept nodes from the external knowledge base as the prompt. The performance drops of these two variants in Table 4, with the significance test in Appendix C.2, demonstrate the importance of the adaption and symbolic modules.
Effects of the Large Language Model Architecture We use GPT2 and GPT3 as autoregressive architectures and BART (Lewis et al., 2020) as an autoencoder architecture. The autoregressive architecture achieves better results than the autoencoder one. Since the pre-training objective of the autoregressive GPT is to predict the next token given the previous input tokens, we assume the performance gain of GPT is due to the smaller gap between the pre-training objective and procedural planning.
Level of Complexity We report results on the test set separated into several buckets according to the number of steps in the procedural planning task; the step number reflects the difficulty of the task. In Table 7 and Table 8 in Appendix C.2, we show that the average performance gain of PLAN over the baselines is consistent, or even more significant, in more complicated procedural planning settings. This indicates the superiority of PLAN in solving long-horizon tasks.
4.4 RESULTS ON COUNTERFACTUAL TASK SAMPLES
We apply the Initial Configuration, Intermediate Step, and Final Goal interventions on RobotHow and the Intermediate Step intervention on WikiHow. Human evaluations under the counterfactual setting are summarized in Table 1 and Table 2. PLAN consistently outperforms the baselines by a large margin and experiences a much smaller performance drop compared with the powerful baselines when switching to the counterfactual setting. We assume this is due to the biased knowledge of the held-out examples and manual exemplars utilized in the baselines, which are vulnerable to counterfactual samples. Automatic evaluations on counterfactual RobotHow are summarized in Table 13 in Appendix C.2. Aligned with the human evaluations, PLAN achieves the best performance. The overall poor performance in the Final Goal category indicates the challenge of long-horizon and composite procedural planning, while the overall better performance in the Intermediate Step category benefits from the intermediate guidance.
4.5 CORRELATION BETWEEN AUTOMATIC AND HUMAN EVALUATION
We evaluate segment-level Pearson Correlation between human and automatic metrics. We observe that BERTScore has a moderate correlation to the human coverage score and WMD has a moderate correlation to the human order score, with 23.3% and 32.3% respectively. Similar to the prior findings (Xu et al., 2021), n-gram-based metrics (Sentence-BLEU and ROUGE) have a relatively weaker correlation to the human coverage score, with a Pearson correlation of 16.4% and 21.1%. Overall, our automatic and human evaluation scores are consistent with the main claim of this paper. However, human evaluation is still irreplaceable for procedural planning at the current stage.
5 RELATED WORK
Procedural Planning Learning to generate procedural plan (Zhang et al., 2020a; Lyu et al., 2021; Zhang et al., 2020b; Chang et al., 2020; Wu et al., 2022; Huang et al., 2022) is important for embodied agentTellex et al. (2011); Jansen (2020); Ahn et al. (2022) and conversational assistants (Ilievski et al., 2018; Yang et al., 2022). Previous work views procedural script learning as a structured form of commonsense knowledge Gupta et al. (2004); Regneri et al. (2010); Wanzare et al. (2016), while more recent work strengthens its association with the changing environments for executable action planning Puig et al. (2018); Shridhar et al. (2020). Some works (Sun et al., 2020; Zhao et al., 2021) explore to utilize human written programs to precisely specify tasks. Our method tackles the problem with aware of cause-effect by utilizing commonsense-infused prompts via a neuro-symbolic approach (Mao et al., 2019; Nye et al., 2021; Yi et al., 2018) for zero-shot procedural planning.
Causality for Language Generation The integration of causality and machine learning has been an intriguing topic for many problems Pearl (2009); Schölkopf (2022). Previous studies focusing on causal inference for natural language understanding Chen et al. (2020); Keith et al. (2020); WoodDoughty et al. (2018) and generating counterfactual text representations Feder et al. (2021). Weber et al. (2020) proposes an intervention method for script learning. However, these methods cannot be directly applied to procedural planning which requires a formal structure. Our method is based on mediation analysis VanderWeele (2015) and causal intervention Pearl (2009); Peters et al. (2017).
Prompt for Large Language Model There is an emerging interest in using prompts to extract knowledge from large language models (Chen et al., 2022; Le Scao & Rush, 2021; Su et al., 2022; Ye et al., 2022; Zhou et al., 2022; Kojima et al., 2022). Cao et al. (2022) treats the prompt as a cause of the task-specific predictor and investigates biases in prompt-based probing evaluations. Chain of thought Wei et al. (2022) discovers that LLM can perform better on reasoning tasks when the prompt is designed as a series of short sentences that mimic the reasoning process of humans.
6 CONCLUSION AND FUTURE WORK
Procedural planning is a newly emerged research area of great importance to various applications, such as household robots and virtual assistants. We propose a neuro-symbolic procedural PLANner (PLAN) with commonsense-infused prompts elicited from the external knowledge base to solve the procedural planning problem in a zero-shot manner without human annotated exemplars. Experiments show the effectiveness of our proposed PLAN under both origin and counterfactual settings, indicating the capability of mitigating spurious correlation by injecting external knowledge in LLMs. Though, procedural planning over long-horizon and composite tasks remains challenging. And exploring multimodal learning and developing human-aligned evaluation metrics are promising future directions in this area.
7 ETHICAL STATEMENT
Given the limited diversified cultural background of the dataset we are using from RobotHow and WikiHow, we assume our results may be biased toward a single cultural background. For instance, given the task ”make breakfeast”, it should take multi-culture into consideration to generate the procedural plans.
8 REPRODUCIBILITY STATEMENT
We provide more data samples and qualitative samples in supplemental materials. In addition, we provide our code implementation at https://anonymous.4open.science/r/PLANNER-7B24 to reproduce our experiments. The Preprocess folder provides the utils to construct the data. The Evaluation folder provides the code for automatic and human evaluation tools. The Planning folder contains the main code for our approach and reproduced planners for procedural planning. The Visualization folder provides the code we use to visualize in the environment.
ACKNOWLEDGMENTS
The research was sponsored by the U.S. Army Research Office and was accomplished under Contract Number W911NF-19-D-0001 for the Institute for Collaborative Biotechnologies. This work was also supported by the National Science Foundation award #2048122. We thank the Robert N.Noyce Trust for their generous gift to the University of California via the Noyce initiative. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Appendix
Table of Contents
A SCM Theoretical Details 16
A.1 Causal Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 A.2 The Backdoor Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 A.3 The Front-door Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
B Implementation Details 19 B.1 Original Dataset Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 B.2 Counterfactual Dataset and Experiment Details . . . . . . . . . . . . . . . . . . 20 B.3 Method Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 B.4 Hyperparameter Search and Configuration Deicision . . . . . . . . . . . . . . . 21 B.5 Computation and Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
C Evaluation Details 21
C.1 Crowdsourcing Human Evaluation . . . . . . . . . . . . . . . . . . . . . . . . 21 C.2 More Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
D Qualitative Examples 29
D.1 Intermediate Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 D.2 Predicted Procedural Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
E Discussion 34
E.1 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 E.2 Failure Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 E.3 Ethical considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A SCM THEORETICAL DETAILS
A.1 CAUSAL PRELIMINARIES
The Structural Causal Model (SCM) is a directed acyclic graph (DAG) to describe the causal relationships within a system Pearl (2009). In this paper, we refer to the unrolled SCM along the time dimension as the full temporal causal graph, while the rolled-up version is also called the causal summary graph Peters et al. (2017). In an SCM, if the variable D is a cause of both T and Si, then it is called a confounder. A confounder opens up a backdoor path and causes a spurious correlation between T and Si. The backdoor path is defined as the remaining path between T and Si when all the arrows pointing out of T are removed. Therefore, T ← D → Si is a backdoor path. For our SCM with mediator Pi shown in Figure 4c (same as Figure 2b) from the main paper, there is no backdoor path between T and {Pi, Si−1} because only D → T is left after removing outgoing arrows of T . On the other hand, there is a backdoor path between Pi and Si, i.e. Pi ← T ← D → Si so that Pi indirectly affects the observation of Si through {T, Si−1} and D. The mediator is the variable added between treatment variable (the cause T and Si−1 in our case) and treatment variable (the effect Si in our case), and thus blocks all directed path from the cause to effect ( (Zhang et al., 2016)). The spurious correlations happens when two variables are statistically related but not causally related because of a third variable influences these two variables at the same time or the correlation is coincidental.
To identify the true causal effect between X and Y , we aim to estimate the conditional π(Y |do(X)) after intervention with the do-operator. The do-operator is to break the backdoor path by setting X to a fixed value independent of Z. Then the path Z → X can be removed to eliminate the backdoor paths. In practice, the backdoor adjustment and front-door adjustment are two fundamental methods to implement interventions and obtain the conditional π(Y |do(X)). Clarity of the Definition As a language prompt, Pi inherits the content from Pi−1 and thus can be detached from steps before Si−1 for simplicity.
Causal Intervention There are two types of operation to control the confounding bias: the backdoor adjustment and the front-door adjustment (Pearl, 2009). The backdoor adjustment is intractable in our case because it requires the prior distribution of the confounding variables. On the other hand, we can construct an input prompt as a mediator Pi for T → Si and Si−1 → Si. Then the front-door adjustment applies a two-step do-operation to mitigate bias by investigating P → Si (Pearl, 2009). Specifically, we construct the prompt mediator Pi using techniques illustrated in Section 2.2.
The pre-trained knowledge (D) in LLMs confounds language models to make biased decisions toward an unreasonable action. Since the confounder is unobservable, intervention techniques such as back-door (definition in Appendix A.2) adjustment (Hu & Li, 2021; Weber et al., 2020; Yue et al., 2020) are not applicable in our SCM. Instead, we build a mediator and implement it as a commonsense-infused prompt. Through the mediator, we can identify causal effects among goals and steps by investigating the indirect effect from the goals, which is essentially the front-door adjustment (definition in Appendix A.3) in causality (Pearl, 2009).
A.2 THE BACKDOOR ADJUSTMENT
The backdoor adjustment is one way to realize the intervention do(T = t) by considering the conditional probability over the existing data distribution with observed confounder D. Let πi denote π(·|Pi−1) that represent the probability density function conditioned on Pi−1. It calculates the average causal effects by considering all stratums of the dataset:
πi(Si|do(T )) = ∑ d πi(Si|T,D = d)πi(D = d) (5)
However, for LLMs, the pretraining data is usually unobservable and has been transformed as knowledge incorporated into the hidden space. Therefore, we are not able to directly apply the backdoor adjustment.
A.3 THE FRONT-DOOR ADJUSTMENT
The front-door adjustment is another technique to apply intervention by introducing a mediator Pi when the confounder is unobservable. As is explained in Section 2.2 from the main paper, the front-door adjustment is equivalent to two consecutive do-operations on task T and prompt Pi. We first investigate the generation of S1 and then expand it to St.
Timestep i = 1 As is shown in Figure 4a, since there is no preceding steps, the first step generation involves D, T and P1 only. Similar to the proof in Section 2.2 from the main paper, we have:
πi(S1|do(T )) = ∑ p πi(S1|do(P1 = p))πi(p|do(T ))
= ∑ p πi(p|T ) ∑ t πi(Si|p, T = t)πi(T = t) (6)
By adding intervention to T , we make the value of do(T = t) independent of the confounder D at the beginning. The backdoor path through D → T is eliminated as a result.
Timestep i > 1 As is shown in Figure 2a from the main paper, we model the mediator P1 as an effect of three variables, T , Pi−1 and Si−1. The first step of our front-door adjustment is to apply the do-operator on the three variables and observe the change in Pi as explained in Section 2.2 from the main paper. Since there are no backdoor paths between Pi and these variables, we have the probability after intervention equal to the conditional probability without intervention:
πi(Pi = p|do(T )) = πi(Pi = p|T ) (7) πi(Pi = p|do(Pi−1)) = πi(Pi = p|Pi−1) (8) πi(Pi = p|do(Si−1)) = πi(Pi = p|Si−1) (9)
The second step is to apply do-operator on Pi and then identify the causal effect as: πi(Si|do(Pi)) = ∑ t,p′,s ( πi(Si|Pi, T = t, Pi−1 = p′, Si−1 = s)
πi(T = t, Pi−1 = p ′, Si−1 = s) ) (10) Combining Equation7-9 and Equation 10, we have the front-door adjustment. Note that there are three backdoor paths from each of the variables T , Pi−1, and Si−1, as is shown in Figure 4b (drawn
in blue, red and purple). More importantly, the one through T , i.e. Pi ← T ← D → Si (the blue path in Figure 4b) and the one through Pi−1, i.e. Pi ← Pi−1 ← T ← D → Si (the red path in Figure 4b) shares the same subpath. The intervention on the task T breaks the backdoor paths for both T and Pi−1. Therefore, we have our front-door adjustment as
πi(Si|do(Si−1),do(Pi−1), do(T )) (11) = ∑ p πi(Si|do(Pi = p))πi(p|do(Si−1), do(Pi−1), do(T )) (12)
= ∑ p πi(Si|do(Pi = p))πi(p|do(Si−1), Pi−1, do(T )) (13)
= ∑ p πi(Si|do(Pi = p))πi(p|do(Si−1), do(T )) (14)
= ∑ p πi(p|Si−1, T ) ∑ s,t πi(Si|p, Si−1 = s, T = t)πi(Si−1 = s, T = t) (15)
= πi(Si|do(Si−1), do(T )) (16)
We have Equation 13 because of the intervention on T and Rule 2 (Pearl, 1995), Equation 14 because of Rule 1 (Pearl, 1995). After simplification based on Equation 12-16, we get the SCM at timestep i > 1 in Figure 4c. This is an equivalent SCM after eliminating Pi−1 in Figure 4b. The reason we could eliminate Pi−1 is as follows. We follow a common method of constructing temporally-extended prompt, which is to append the prediction at previous timesteps to the prompt at current timestep. In our case, the PG,i is the same as PG,i−1, thus Pi inherit part of the content from Pi−1, the change only depend on the Si−1. Thus Pi−1 and Si−2 are fixed, and there is no need to predict Pi−1 at timestep i again. In this way, we simplify the causal graph in Figure 4b to the one in Figure 4c. In summary, we define and simplify the causal graph based on the temporal-extended property of our prompt construction (Pi inherit the content from Pi−1). We end up with Equation 14-16 which is shown as Equation 3 in Section 2.2 from the main paper.
B IMPLEMENTATION DETAILS
B.1 ORIGINAL DATASET DETAILS
RobotHow This dataset is Attribution-NonCommercial-ShareAlike 4.0 International Creative Commons License. We evaluate the inference of 150 tasks by random selection from the dataset. Each program contains the task name, task description and steps. We use the task name and sequence of steps as our input and output references.Each step is a composition of [Action], [Object] and [Number]. For example, the sequence of steps of the task ”Watch TV” are: 1. [Walk] <TELEVISION> (1) 2. [SwitchOn] <TELEVISION> (1) 3. [Walk] <SOFA> (1) 4. [Sit] <SOFA> (1) 5. [Watch] <TELEVISION> (1).
WikiHow This dataset2 is under an Attribution-Noncommercial-Share Alike 3.0 Creative Commons License. And the text content is free to modify, republish and share. We evaluate the inference of 1000 tasks by random selection from the dataset. The admissible action space and interaction object space are more complex than the programs in RobotHow. And there is no fixed ”[Action] ¡Object¿ (Number)” form of each step. For each article, it contains the title, the bold headlines and text. We utilize the title and headlines as our task name and steps respectively.
External Knowledge Base For the external knowledge base, we utilize ConceptNet to leverage commonsense reasoning ability to help ground language generation in goal-guided procedural text generation. ConceptNet (Speer et al., 2017) captures commonsense knowledge explicitly with triplets of (head node, relation, end node). It contains 799, 273 nodes and 2, 487, 810 edges that represent both symmetric and asymmetric relations. Specifically, the core relations we utilized are Synonym, AtLocation, CapableOf, Causes, CausesDesire, HasPrerequisite, HasSubevent, and UsedFor. Since we are looking at the commonsense knowledge in house-holding tasks, so we filter out the relations (/r/DistinctFrom, /r/DerivedFrom, /r/SymbolOf, /r/EtymologicallyRelatedTo, /r/EtymologicallyDerivedFrom) that are related to the linguistic.
2https://www.wikihow.com
B.2 COUNTERFACTUAL DATASET AND EXPERIMENT DETAILS
Table 6 show the examples that compare the original program and the counterfactual program of each intervention method are also provided. Specifically, for Initial Configuration, we randomly append the location to a given task name to constrain the location of completing the task. The steps are prepended with the initial step ”walk to ¡Location¿”. For Intermediate Step, we randomly sampled a step from the task-specific program and append it to the task name to constrain the way to implement a given task. For Final Goal, we randomly combine two tasks by combining both the task names and the programs to construct a set of long-horizon composite tasks.
We conduct counterfactual experiments by applying randomly selected intervention methods over RobotHow. And we only apply the Intermediate Step intervention method over WikiHow due to the loose configuration requirement and the long text of the WikiHow contents. Note that the performance gain of PLAN under the counterfactual setting mainly comes from the additional guidance of the task introduced from the Intermediate Step intervention method. However, the baselines mostly experience performance drops due to the limited annotated exemplars. PLAN consistently outperforms baselines by a large margin, indicating its superiority under the counterfactual setting.
B.3 METHOD DETAILS
The existing formalization of the procedural planning task can be mainly categorized as 1) sequential choice making (Lyu et al., 2021; Wu et al., 2022; Zhang et al., 2020a;b), which reasons about the next step from the options given, the task, and previous steps; 2) conditioned generation (Huang et al., 2022; Ahn et al., 2022), which generates the temporally extended plans to implement the task. We study the procedural planning task as the conditioned generation problem (Huang et al., 2022; Ahn et al., 2022) since it resembles real-world scenarios.
Baselines LLMaP propose a procedure to extract temporally extended plans from large pre-trained language models. Chain explores manually creating exemplars that mimic the reasoning process
and uses them to prompt large language models for reasoning tasks. To compare with Chain on the procedural planning task, we manually generate exemplars that contain the chain of thought for 1% of the inference task programs. Note that for the BART language model, we use BART-large version. And we use the 1.5 billion parameter GPT-2 (aka gpt2-xl). For the translation model LMT , we use sentence-transformers (RoBERTa-large). All these models are released by HuggingFace. In addition, our experiments with GPT3 (davinci) use OpenAI API (May, 2022).
External Knowledge Graph Conceptnet5 define a set of 34 relations (3). Within the relations we consider in the procedural planning task, the averaged sampling time of subgraph sampling is 0.03576 milliseconds per task program.
B.4 HYPERPARAMETER SEARCH AND CONFIGURATION DEICISION
We perform a hyperparameter search for all evaluated methods for the following hyperparameters.
• The confidence threshold θ, which terminate the generation when below it, is searched in {0, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8}.
• The steps horizon, which constrains the maximal number of procedural planning steps, is searched in {10, 20, 40}.
• The number of hops for retrieving the subgraph from the external knowledge base is searched in {1, 2, 3}.
• The ratio of maximal concepts to the length of the task name is searched in {1, 2, 3}. • The cosine similarity threshold for keeping the task-specific concept is searched in {0.4, 0.6, 0.8}.
• The edge weight threshold θe is searched in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}. • The top-k task-specific nodes value is searched in {1, 5, 10, 15, 20, 25, 50, 100}.
The configurations used in the experiments are: θ=0.7, 20 step horizon, 3 hops, 3 ratio of concepts to task length, cosine similarity threshold 0.4, θe=0.6 and k=10.
We empirically choose the hop number H as 3 considering both the input length limit of the LLMs and the fact that 3-hop contains reasonable relevant information in practice (Zhang et al., 2022).
B.5 COMPUTATION AND RESOURCES
We use one single NVIDIA A100 GPU Server for all the experiments. Since there is no training in our zero-shot settings, the computation is only used for the inference stage of the experiments.
C EVALUATION DETAILS
C.1 CROWDSOURCING HUMAN EVALUATION
We conduct all the human evaluations (rating and win-lose comparison) on Amazon Mechanical Turk platform. Each example is rated by 3 annotators. We ask Amazon Mechanical Turk workers, for every assignment, to evaluate the quality of the provided low-level steps given the high-level task description. For the Win-Lose Comparison, they were asked to choose one from the two provided model generated results by 1:the first one is better, 2:equal and 3:the second one is better. For the Human Ratings, they were asked to score each sample with 5-point Likert scale. This process does not involve collecting any personal information. And we manually check no offensive content is produced by the models.
The assignment layout templates for workers are shown in Figure 7 and Figure 6. Specifically, we evaluate randomly selected 50 task examples from each dataset (RobotHow and WikiHow) under all the settings (standard and counterfactual). We only collect the examples that the workers read the instructions carefully by checking whether they give 1 score for the empty program as a sanity check. The hourly wage paid to participants is estimated $9. And the total amount spent on participant
3https://github.com/commonsense/conceptnet5/wiki/Relations
compensation is $1296. The details of the Human Intelligence Tasks process are described in the following sections.
C.1.1 WIN-LOSE COMPARISON
During the process of Human Intelligence Tasks, the workers are shown the following instructions: Read the given task and the sequence of steps, determine which set of steps can better complete the target task. In other words, can the task be decomposed into these steps? Please consider the sequential order of the steps.
Then the program to be evaluated is provided as:
Question Task: Study
Sequence 1:: Step 1: Walk to textbook Step 2: Read book Step 3: Walk to book
Sequence 2:: Step 1: Walk to home office Step 2: Find desk
Finally, the workers are asked to score the program by following the instructions below: Select an option: 1 - Sequence 1 is better; 2 - Tie; 3 - Sequence 2 is better
The above example is to evaluate the order metric, for the coverage metric, the same process are conducted, except for the instructions are: Read the given task and the sequence of steps, and determine which sequence covers more steps that are necessary to complete the target task. Please ignore the sequential order of the steps.
C.1.2 HUMAN RATINGS
Similar as the Win-Lose Comparison Human Intelligence Tasks, the workers are shown the following instructions: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (Please consider the sequential order of the steps.). You could directly give the lowest score (1) for the empty steps. In other words, can the task be decomposed into these steps? (Please consider the sequential order of the steps.)
Then the program to be evaluated is provided as:
Question Task: Write an email
Sequence of Steps: Step 1: Walk to home office Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Walk to computer Step 7: Find chair Step 8: Sit on chair Step 9: Find keyboard Step 10: Grab keyboard Step 11: Find mouse Step 12: Grab mouse Step 13: Type on keyboard
Finally, the workers are asked to score the program by following the instructions below: Use the slider below to indicate how much you agree with the following statement (1 = Strongly disagree, 5 = Strongly agree). If ”sequence of steps” are blank, please directly choose 1 (lowest score). The task can be completed in any reasonable scenario using the provided steps. [SLIDER PROVIDED
HERE]
The above example is to evaluate the order metric, for the coverage metric, the same process is conducted, except for the instructions are: For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps (Please ignore the sequential order of the steps.). You could directly give the lowest score (1) for the empty steps. In other words, can the task be decomposed into these steps? (Please ignore the sequential order of the steps.)
C.2 MORE RESULTS
Significance Test We provide paired-t test (p¡0.05) statistics results for Table 2. On RobotHow, our PLAN significantly outperforms all baselines on Original-Order(BART) and CounterfactualCoverage(GPT2). On WikiHow, our PLAN significantly outperforms all baselines on OriginalCoverage(BART, GPT2), Counterfactual-Coverage(BART, GPT2), and Counterfactual-Order(BART). For the coverage metric under the counterfactual setting, the human-provided program is not significantly better than our PLAN.
We also conduct the paired-t test (p¡0.05) statistics results over the variant “w/o Adaption” and “w/o Symbolic”. Compared with the full model PLAN, the variants experienced a statistically significant
performance drop. Especially on BERTScore-f1, the p-value is 8.884e−13 and 1.4e−8 respectively. This further confirms the importance of the modules.
Results on GPT-3 In addition, we conduct experiments with GPT-3 (davinci version) using OpenAI API. We showcase the comparison in Table 9 and Table 10.
Motivation of Evaluation Metrics Since the nature of the procedural planning task can be opendomain in that the golden plans may not be unique. This leads to the challenge that common automatic metrics proposed in natural language task are not perfect to evaluate procedural planning. The same observations of such challenge to directly judge the system using automatic metrics are discussed in LLMaP(Huang et al., 2022) as well. We assume that the human evaluation on Coverage and Order can reflect how well the procedural plans are close to human annotated program, because the human annotators are required to determine whether the task can be completed in any reasonable scenario using the procedural plans explicitly. Thus we provide both the automatic evaluation and human evaluation on two aspects Coverage and Order, with description in the Metrics paragraph in Section 4.1.
Evaluation on Success Rate Metric To make human evaluations more intuitive, we provide an additional Success Rate metric to show whether the procedural plans can successfully implement the task, which focus more on the success rate instead of the coverage or the order of the plans. We show the Success Rate evaluations on the baselines and our method in Table 11. The assignment layout template for workers is shown in Figure 8.
More Ablation To verify the contribution of the first translation language model LMT that translates the knowledge prompt PG into admissible one P̂G, we conduct an additional ablation experiment by simply removing the first LMT and replacing P̂G with PG to prompt the LLM for procedural planning. We provide results with comparisons to other ablations in Table 12.
Results on Counterfactual Task Samples We show automatic evaluation results on counterfactual RobotHow in Table 13.
D QUALITATIVE EXAMPLES
D.1 INTERMEDIATE OUTPUT
We provide running examples with intermediate output for each module in the following paragraph. First, we show the intermediate output of input task T , the subgraph Gs depicted in the tuple of the start node, relation type, tail node and edge weight, the knowledge prompt PG and the translated one P̂G as below:
• Input task T : Take shower.
• Human-annotated Plan Reference: Step 1: Walk to bathroom. Step 2: Walk to clothes dress. Step 3: Find clothes dress. Step 4: Put off clothes dress. Step 5: Find shower. Step 6: Enter shower. Step 7: Find soap. Step 8: Grab soap. Step 9: Scrub soap. Step 10: Put back soap. Step 11: Leave shower. Step 12: Find towel. Step 13: Grab towel. Step 14: Wipe towel. Step 15: Find clothes dress. Step 16: Put on clothes dress.
• Task-relevant subgraph Gs(Nhead, Re, Ntail, Ew): (take a shower, HasLastSubevent, dry off, 6.0); (bathe, HasLastSubevent, dry off, 6.0); (take a shower, HasPrerequisite, take out your clothes, 4.47); (take a shower, HasSubevent, get clean, 4.47); (take a shower, HasPrerequisite, take your clothes off, 3.46); (go to a party, HasPrerequisite, take a shower, 2.82); (play lacrosse, HasLastSubevent, take a shower, 2.82); (get clean, HasPrerequisite, take a shower, 2.82); (take a shower, MotivatedByGoal, wash your hair, 2.82); (play sports, HasLastSubevent, take a shower, 2.82); (go to the hairdresser, HasPrerequisite, take a shower, 2.82); (take a shower, HasPrerequisite, turn on the water, 2.0); (have a bath, HasLastSubevent, dry off, 2.0); (get wet, HasSubevent, dry off, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); (take a shower, HasSubevent, wash your hair, 2.0); (take a shower, HasLastSubevent, turn off the water, 2.0); (become more clean, HasLastSubevent, dry off, 2.0); take a shower, HasLastSubevent, put your clothes on, 1.0); (take a shower, HasSubevent, use shampoo, 1.0); (take a shower, HasSubevent, wash behind your ears, 1.0); (take a shower, HasSubevent, wash your body, 1.0); (take a shower, HasPrerequisite, go to the bathroom, 1.0); (take a shower, HasPrerequisite, go to the bathroom and undress, 1.0); (take a shower, HasPrerequisite, step into the shower, 1.0); (take a shower, HasPrerequisite, soap up, 1.0); (take a shower, HasPrerequisite, bring some soap, 1.0); (take a shower, HasPrerequisite, bring some shampoo, 1.0); (take a shower, HasLastSubevent, towel yourself off, 1.0); (take a shower, HasPrerequisite, remember to take your towel, 1.0);
• Knowledge prompt PG: Step: take out your clothes. Step: set clean clothes. Step: go to the bathroom. Step: go to the bathroom and undress. Step: take your clothes off. Step: turn on the water. Step: bring some soap. Step: bring some shampoo. Step: remember to take your towel. Step: get clean. Step: wash your hair. Step: use shampoo. Step: wash behind your ears. Step: wash your body. Step: turn off the water. Step: dry off. Step: become more clean. Step: put your clothes on.
• Translated knowledge prompt P̂G: Step: find clothes underwear. Step: wash shower. Step: walk to bathroom. Step: look at shower. Step: take off clothes underwear. Step: wash shower. Step: find soap. Step: grab soap. Step: scrub shower. Step: find shampoo. Step: grab shampoo. Step: find towel. Step: wipe shower. Step: wash hair. Step: wipe shampoo. Step: scrub shower. Step: wash body. Step: switch off washing machine. Step: scrub shower. Step: wipe shower. Step: put on clothes underwear.
• Generated plan ST : Step 1: Find clothes underwear. Step 2: Walk to Bathroom. Step 3: Take off clothes shirt. Step 4: Scrub shower. Step 5: Pour shampoo into hair. Step 6: Wipe shampoo. Step 7: Wipe hair. Step 8: Wash body. Step 9: Find Towel. Step 10: Put on clothes underwear.
D.2 PREDICTED PROCEDURAL PLANS
More qualitative examples of final predicted procedural plans that are randomly selected are provided. Table 14 show random samples on the original dataset. Table 15 show random samples on the counterfactual datasets with the Intermediate Step intervention method. And Table 16 shows random
samples on the counterfactual RobotHow with the Initial Configuration and Final Goal intervention methods.
Model Program RobotHow Task: Play Games
Human Step 1: Walk to bedroom.Step 2: Walk to video game console. Step 3: Find video game console.Step 4: Walk to video game console. Step 5: Switch on video game console.Step 6: Find television. Step 7: Switch on television.Step 8: Find video game controller. Step 9: Grab video game controller.Step 10: Find chair. Step 11: Walk to chair.Step 12: Sit on chair
Chain Step 1: Put chef knife on water glass.Step 2: Find sink.
LLMaP Step 1: Walk to bedroom.Step 2: Walk to video game console.Step 3: Find video game console
Model Program RobotHow Initial Configuration Task: Play games in dining room.
Human Step 1: Walk to dining room.Step 2: Walk to video game console. Step 3: Find video game console.Step 4: Walk to video game console. Step 5: Switch on video game console.Step 6: Find television. Step 7: Switch on television.Step 8: Find video game controller. Step 9: Grab video game controller.Step 10: Find chair. Step 11: Walk to chair.Step 12: Sit on chairTask: Play games in dining room. Step 1: Walk to video game controller.Step 2: Put video game controller on diningtable. Step 3: Put boardgame on kitchen table.Step 4: Put boardgame on diningtable.
Chain Step 1: Walk to dining room.Step 2: Walk to sauce pan. Step 3: Pour bottle water into dish bowl.Step 4: Walk to water. Step 5: Walk to carrot.Step 6: Walk to food salt.
E DISCUSSION
E.1 LIMITATIONS
Though pointing out a direction to prompt out actionable knowledge in large-scale pre-trained language models with external commonsense knowledge, the limitations of reasoning long-horizon procedural plan still exist. Existing datasets for procedural planning like WikiHow and RobotHow are all monolingual supporting only English goals and plans. In the future, it is important to expand these datasets or having novel datasets that support multiple languages used across the world. The inherent difference between these languages may also result in different planning strategies in granularity or abstraction levels, which is potentially challenging. In addition, the long-horizon and complex composite tasks still remain challenging for the existing procedural planners.
Above limitations are discussed mainly based on the challenges of procedural planning task. In addition, there are limitations of our implementation that are guided by our causal analysis. First, the coverage of the leveraged external resources is limited, which is common in a knowledge-enhanced system. This may result in the wrong understanding of the task and produce not reasonable procedural plans. For example, the knowledge of the word ”Turking”, which refers to ”The act or process of performing small tasks using the Amazon Mechanical Turk service.” according to Wiktionary, is not covered in the external resources (e.g., ConceptNet). Since our proposed system does not assume specific external resources. It is plausible in the future if we utilize more powerful external resources (e.g., Wiktionary). Second, the hop number and the threshold of the multi-hop retrieval in taskrelevant subgraph sampling is currently a configured hyperparameter. This may result in not ideally constructed prompt. The future work could instead make these hyperparameters learnable on each task domain, and also explore the pros and cons between end-to-end commonsense-infused prompt versus neuro-symbolic constructed prompt.
E.2 FAILURE ANALYSIS
We discuss detailed failure modes and examples with analyses below. For example, the predicted procedural plan on task ”Turking”, which refers to ”The act or process of performing small tasks using the Amazon Mechanical Turk service.” according to Wiktionary. We compare the predicted procedural plan on this task among baselines and our method: (1) The ground truth plan is ”Task: Turking. Step 1: Walk to home office.Step 2: Walk to desk.Step 3: Find chair.Step 4: Sit on chair.Step 5: Find computer.Step 6: Switch on computer” (2) The plan predicted by Chain baseline is empty. (3) The plan predicted by LLMaP baseline is ”Task: Turking. Step 1: Put teddybear on oven.” (4) Our prediction is ”Task: Turking. Step 1: Eat food turkey. Step 2: Drink water. Step 3: Sleep.” We can see that for the ”out-of-knowledge” task, our method also lead failure planning. We assume this is mainly due to the limited knowledge in external resources, as discussed in the Appendix E.1, and this main failure mode can be avoided by introducing larger external resources (e.g, Wiktionary), similar as other knowledge-enriched methods.
E.3 ETHICAL CONSIDERATIONS
We hope to de-bias the procedural planning to avoid misleading either humans or robots with daily life instructions, which may result in unsafe situations. The cultural bias behind these datasets can be a critical issue for future work. As the ground truth planning steps usually reflect the culture shared by the English-speaking group, other cultures may have a completely different practical consideration that leads to different orders of these steps or even novel steps that are not proposed by the LLMs we utilized in this paper. In the future, we will consider cultural bias as a proxy variable so that we could adjust the implicit knowledge from LLM or commonsense from external sources according to the different needs of cultural backgrounds. | 1. What is the main contribution of the paper regarding procedure planning?
2. What are the strengths and weaknesses of the proposed model in terms of technical soundness, complexity, and ablation?
3. Do you have any concerns regarding the methodology and implementation of the SCM?
4. How does the paper handle the issue of relational biases and logical order in LLMs?
5. Can the proposed model generate concrete procedural results to help understand each part of the model?
6. What are your suggestions for improving the paper's clarity and presentation? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a procedural planning model that enforces logical order. The authors claim that they can parse the goal into subgraphs and translate the retrieved knowledge into admissible knowledge. The proposed model also learns causal relationships via an SCM for procedural planning. Experimental results show the proposed model surpasses the original baselines by a large margin.
Strengths And Weaknesses
Strengths:
The procedural planning task is interesting and fundamental. Through procedural planning, we can understand how neural networks understand task structure and generate the corresponding plans. The counterfactual cases are also interesting and touch the core of AI.
The source code is provided, so this work might be reproducible.
The idea of using causal models to prevent relational biases and preserve logical order in LLMs is very important and is introduced with good examples.
The improvements in the experiments seem to be large. I like the illustration of the results in Table 5. Sadly, some results are omitted from Table 5. Instead, I suggest the paper include a large table showing concrete procedural results to help readers understand each part of the model. Always try to be concrete and precise.
Weaknesses:
I am confused by this paper's methodology in many places. In general, the paper can be challenged on the following points:
The proposed model is probably not technically sound. In fact, many of the components are not well-defined, and the system looks too complicated. The overall model is not well ablated and is full of tricks.
The proposed model seems to borrow a lot of big concepts, such as neuro-symbolic or causal models. However, the specific technical contributions are not very clear, and many problems remain unaddressed. In general, the system looks too complicated. This is not a good paper. A good paper should stick to one major point and show the advantage of this point over previous baselines.
Human evaluations are not very objective and cannot be easily reproduced. I fail to see a clear motivation for using expensive human evaluations. Instead, showing a lot of generated procedures is far more beneficial than showing a lot of numbers.
The specific comments are:
In the task definition, what are the actions, and what are the object sets? I think in real natural language, the set of actions (described in natural language) or objects can be extremely large. How can the formulation handle this? I did not see the specific usage of these variables. Probably the problem is over-formulated.
The authors use the same variable D for task D_T and confounding variable D. This is hard to decode.
I can hardly understand the implementation of the SCM. What are the input and output of the SCM? Is there any replacement (e.g. standard transformers) for this module? What is the sample output of the SCM? Is there any reference to the SCM? Take Figure 2 as an example; how does the SCM forward the reasoning path? How does the SCM encode the action and object? How does the SCM produce the output? These problems are still unclear to me.
Why does procedural planning need so many steps (a five-stage pipeline)? I see that the motivation is to fuse different sources of information (task definition, previous steps, and external knowledge). If so, why not encode this information in parallel so that the fused (e.g., concatenated) information can be used in downstream tasks?
Algorithm 1 is a waste of space, and it would be better to replace it with a figure (or to improve Figure 3).
Many concrete designs, such as the symbolic executors, are missing. It seems that the symbolic executors are trivial and can be easily learned. In other words, I do not think the "neural-symbolic" stuff in the title plays an important role in this system. In fact, what is neuro-symbolic in this line of work? Why is it important? How does one connect this big concept to the experimental results with the LLMs? These questions are far more important to address than the ad-hoc model designs.
Clarity, Quality, Novelty And Reproducibility
As discussed in the prior section, reproducibility is good. Overall, the novelty should be good because introducing causal models and neuro-symbolic methods is still novel. The major limitation is the clarity and the presentation of the paper. I lean toward acceptance considering the good direction and good ideas. However, if I do not see a major revision of the writing in the rebuttal phase clarifying these concerns, I may decide to reject this paper.
ICLR | Title
Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well
Abstract
We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The resulting models generalize equally well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction in training time and the good generalization performance of the resulting models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet.
1 INTRODUCTION
Stochastic gradient descent (SGD) and its variants are the de-facto methods to train deep neural networks (DNNs). Each iteration of SGD computes an estimate of the objective’s gradient by sampling a mini-batch of the available training data and computing the gradient of the loss restricted to the sampled data. A popular strategy to accelerate DNN training is to increase the mini-batch size together with the available computational resources. Larger mini-batches produce more precise gradient estimates; these allow for higher learning rates and achieve larger reductions of the training loss per iteration. In a distributed setting, multiple nodes can compute gradient estimates simultaneously on disjoint subsets of the mini-batch and produce a consensus estimate by averaging all estimates, with one synchronization event per iteration. Training with larger mini-batches requires fewer updates, thus fewer synchronization events, yielding good overall scaling behavior.
Even though the training loss can be reduced more efficiently, there is a maximum batch size after which the resulting model tends to have worse generalization performance (McCandlish et al., 2018; Keskar et al., 2016; Hoffer et al., 2017; Golmant et al., 2018; Shallue et al., 2018). This phenomenon forces practitioners to use batch sizes below those that achieve the maximum throughput and limits the usefulness of large-batch training strategies.
Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) is a method that produces models with good generalization performance by averaging the weights of a set of models sampled from the final stages of a training run. As long as the models all lie in a region where the population loss is mostly convex, the average model can behave well, and in practice, it does.
We have observed that if instead of sampling multiple models from a sequence generated by SGD, we generate multiple independent SGD sequences and average models from each, the resulting model achieves similar generalization performance. Furthermore, if all the independent sequences use small-batches, but start from a model trained with large-batches, the resulting model achieves generalization performance comparable with a model trained solely with small-batches. Using these observations, we derive Stochastic Weight Averaging in Parallel (SWAP): A simple strategy to accelerate DNN training by better utilizing available compute resources. Our algorithm is simple to implement, fast and produces good results with minor tuning.
For several image classification tasks on popular computer vision datasets (CIFAR10, CIFAR100, and ImageNet), we show that SWAP achieves generalization performance comparable to models trained with small-batches but does so in time similar to that of a training run with large-batches. We use SWAP on some of the most efficient publicly available models to date, and show that it’s
able to substantially reduce their training times. Furthermore, we are able to beat the state of the art for CIFAR10 and train in 68% of the time of the winning entry of the DAWNBench competition.¹
2 RELATED WORK
The mechanism by which the training batch size affects the generalization performance is still unknown. A popular explanation is that because of the reduced noise, a model trained using larger mini-batches is more likely to get stuck in a sharper global minima. In (Keskar et al., 2016), the authors argue that sharp minima are sensitive to variations in the data because slight shifts in the location of the minimizer will result in large increases in average loss value. However, if flatness is taken to be the curvature as measured by the second order approximation of the loss, then counterexamples exist. In (Dinh et al., 2017), the authors transform a flat minimizer into a sharp one without changing the behavior of the model, and in (Li et al., 2018), the authors show the reverse behavior when weight-decay is not used.
In (McCandlish et al., 2018), the authors predict that the batch size can be increased up to a critical size without any drop in accuracy and empirically validate this claim. For example, the accuracy begins to drop for image classification on CIFAR10 when the batch sizes exceed 1k samples. They postulate that when the batch size is large, the mini-batch gradient is close to the full gradient, and further increasing the batch size will not significantly improve the signal to noise ratio.
In (Hoffer et al., 2017), the authors argue that, for a fixed number of epochs, using a larger batch size implies fewer model updates. They argue that changing the number of updates impacts the distance the weights travel away from their initialization and that this distance determines the generalization performance. They show that by training with large-batches for longer (thus increasing the number of updates), the generalization performance of the model is recovered. Even though this large-batch strategy generates models that generalize well, it does so in more time than the small-batch alternative.
Irrespective of the generalization performance, the batch size also affects the optimization process. In (Ma et al., 2017), the authors show that for convex functions in the over-parameterized setting, there is a critical batch size below which an iteration with a batch size of M is roughly equivalent to M iterations with a batch size of one, and batch-sizes larger than M do not improve the rate of convergence.
Methods which use adaptive batch sizes exist (Devarakonda et al., 2017; Goyal et al., 2017; Jia et al., 2018; Smith et al., 2017; You et al., 2017). However, most of these methods are either designed for specific datasets or require extensive hyper-parameter tuning. Furthermore, they ineffectively use the computational resources by reducing the batch size during part of the training.
Local SGD (Zhang et al., 2016; Stich, 2018; Li et al., 2019; Yu et al., 2019) is a distributed optimization algorithm that trades off gradient precision with communication costs by allowing workers to independently update their models for a few steps before synchronizing. Post-local SGD (Lin et al., 2018) is a variant which refines the output of large-batch training with local SGD. The authors have observed that the resulting model generalizes better than the model trained with large-batches and that their scheme achieves significant speedups. In this respect, Post-local SGD is in a very similar vein to the present work. However, while Post-local SGD lets the models diverge for T iterations, where T is on the order of tens, SWAP averages the models once after multiple epochs. For example, in our ImageNet experiments (see Sec. 5) we average our models after tens of thousands of updates, while Post-local SGD does so after at most 32. Because of this difference, we believe that the mechanisms that power the success of SWAP and Post-local SGD must be different and point to different phenomena in DNN optimization.
Stochastic weight averaging (SWA) (Izmailov et al., 2018) is a method where models are sampled from the later stages of an SGD training run. When the weights of these models are averaged, they result in a model with much better generalization properties. This strategy is very effective and has been adopted in multiple domains: deep reinforcement learning (Nikishin et al.), semi-supervised learning (Athiwaratkun et al., 2019), Bayesian inference (Maddox et al., 2019), and low-precision training (Yang et al., 2019). In this work, we adapt SWA to accelerate DNN training.
¹ The DAWNBench leaderboard: https://dawn.cs.stanford.edu/benchmark/
3 STOCHASTIC WEIGHT AVERAGING IN PARALLEL
We describe SWAP as an algorithm in three phases (see Algorithm 1): In the first phase, all workers train a single model by computing large mini-batch updates. Synchronization between workers is required at each iteration and a higher learning rate is used. In the second phase, each worker independently refines its copy of the model to produce a different set of weights. Workers use a smaller batch size, a lower learning rate, and different randomizations of the data. No synchronization between workers is required in this phase. The last phase consists of averaging the weights of the resulting models and computing new batch-normalization statistics to produce the final output.
Phase 1 is terminated before the training loss reaches zero or the training accuracy reaches 100% (for example, a few percentage points below 100%). We believe that stopping early precludes the optimization from getting stuck at a location where the gradients are too small and allows the following stage to improve the generalization performance. However, the optimal stopping accuracy is a hyper-parameter that requires tuning.
During phase 2, the batch size is appropriately reduced and small-batch training is performed independently and simultaneously. Here, each worker (or a subset of them) performs training using all the data, but sampling in different random order. Thus, after the end of the training process, each worker (or subset) will have produced a different model.
Figure 1 plots the accuracies and learning-rate schedules for a run of SWAP. During the large-batch phase (phase 1), all workers share a common model and have the same generalization performance. During the small-batch phase (phase 2) the learning rates for all the workers are the same but their testing accuracies differ as the stochasticity causes the models to diverge from each other. We also plot the test-accuracy of the averaged model that would result were we to stop phase 2 at that point. Note that the averaged model performs consistently better than each individual model.
4 LOSS LANDSCAPE VISUALIZATION AROUND SWAP ITERATES
To visualize the mechanism behind SWAP, we plot the error achieved by our test network on a plane that contains the outputs of the three different phases of the algorithm. Inspired by (Garipov et al., 2018) and (Izmailov et al., 2018), we pick orthogonal vectors u, v that span the plane which contains θ1, θ2, θ3. We plot the loss value generated by model θ = θ1+αu+βv at the location (α, β). To plot a loss value, we first generate a weight vector θ, compute the batch-norm statistics for that model (through one pass over the training data), and then evaluate the test and train accuracies.
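For concreteness, the plane construction can be sketched as follows. This is our own minimal illustration (not the authors' code), assuming flattened weight vectors theta1, theta2, theta3 as NumPy arrays; the function names are ours.

import numpy as np

def make_plane(theta1, theta2, theta3):
    # Orthonormal u, v spanning the plane through theta1, theta2, theta3 (Gram-Schmidt).
    u = theta2 - theta1
    u = u / np.linalg.norm(u)
    v = theta3 - theta1
    v = v - np.dot(v, u) * u        # remove the component of v along u
    v = v / np.linalg.norm(v)
    return u, v

def point_on_plane(theta1, u, v, alpha, beta):
    # Weight vector θ = θ1 + αu + βv at grid location (α, β).
    return theta1 + alpha * u + beta * v

Each grid point is loaded back into the network, batch-norm statistics are recomputed with one pass over the training data, and the train and test errors are then evaluated, as described above.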
In Figure 2, we plot the training and testing error for the CIFAR10 dataset. Here ‘LB’ marks the output of phase one, ‘SGD’ the output of a single worker after phase two, and ‘SWAP’ the final model.
Algorithm 1: Stochastic Weight Averaging in Parallel (SWAP)
Input: number of workers W; weight initialization θ0; t = 0;
  training accuracy τ at which to exit phase one;
  learning-rate schedules LR1 and LR2 for phases one and two, respectively;
  mini-batch sizes B1 and B2 for phases one and two, respectively;
  gradient of the loss function for sample i at weight θ: g^i;
  SGDUpdate(·): a function that updates the weights using SGD with momentum and weight decay.
Phase 1:
  while training accuracy ≤ τ do
    η ← LR1(t)
    for w in [0, ..., W−1] in parallel do
      B_w ← random sub-sample of the training data of size B1/W
      g^w ← (W/|B1|) Σ_{i∈B_w} g^i   /* worker gradient */
    end
    g_t ← (1/W) Σ_w g^w   /* synchronization of worker gradients */
    θ_{t+1} ← θ_t + SGDUpdate(η_t, g_t, g_{t−1}, · · · )   /* first-order method update */
    t ← t + 1; T ← t
  end
Phase 2:
  for t in [T, T + Q] do
    η ← LR2(t − T)
    for w in [0, ..., W−1] in parallel do
      B_w ← random sub-sample of the training data of size B2
      g^w ← (1/|B2|) Σ_{i∈B_w} g^i   /* worker gradient */
      θ^w_{t+1} ← θ^w_t + SGDUpdate(η_t, g^w_t, g^w_{t−1}, · · · )   /* first-order update at local worker */
    end
  end   /* we get W different models at the end of phase 2 */
Phase 3:
  θ̂_ℓ ← (1/W) Σ_w θ^w_{T+Q}   /* produce averaged model */
  compute batch-norm statistics for θ̂_ℓ to produce θ_ℓ
Result: final model θ_ℓ
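For concreteness, phase 3 of Algorithm 1 can be implemented in a few lines. The following is a minimal PyTorch sketch under our own assumptions (a list workers of W refined models from phase 2 and a train_loader over the training set); it is an illustration, not the authors' implementation.

import copy
import torch

def swap_phase3(workers, train_loader, device="cuda"):
    # Average the weights of the W worker models (phase 3 of Algorithm 1).
    avg = copy.deepcopy(workers[0]).to(device)
    avg_state = avg.state_dict()
    for key in avg_state:
        avg_state[key] = torch.stack(
            [w.state_dict()[key].float() for w in workers]).mean(dim=0)
    avg.load_state_dict(avg_state)
    # Recompute batch-norm running statistics with one pass over the training data.
    for m in avg.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None   # None -> cumulative (true) averages in PyTorch
    avg.train()
    with torch.no_grad():
        for x, _ in train_loader:
            avg(x.to(device))
    return avg

Recent PyTorch versions also provide torch.optim.swa_utils.update_bn for an equivalent batch-norm pass.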
Color codes correspond to error measures at the points interpolated on the plane. In Figure 2a, we observe that the level-sets of the training error (restricted to this plane) form an almost convex basin and that both the output of phase 1 (‘LB’)² and the output of one of the workers of phase 2 (‘SGD’) lie on the outer edges of the basin. Importantly, during phase 2 the model traversed to a different side of the basin (and not to the center). Also, the final model (‘SWAP’) is closer to the center of the basin.
When we visualize these three points on the test loss landscape (Figure 2b), we observe that the variations in the topology of the basin cause the ‘LB’ and ‘SGD’ points to fall in regions of higher error. But, since the ‘SWAP’ point is closer to the center of the basin, it is less affected by the change in topology. In Figure 3, we neglect the ‘LB’ point and plot the plane spanned by three workers ‘SGD1’, ‘SGD2’, ‘SGD3’. In Figure 3a, we can observe that these points lie at different sides of the training error basin while ‘SWAP’ is closer to the center. In Figure 3b, we observe that the change in topology causes the worker points to lie in regions of higher testing errors than ‘SWAP’, which is again close to the center of both basins. For reference, we have also plotted the best model that can be generated by this region of the plane.
² Recall that the weights ‘LB’ are obtained by stopping the large-batch training early in phase 1. Hence, the training error for ‘LB’ is worse than that of ‘SGD’ and ‘SWAP’.
4.1 SAMPLING FROM INDEPENDENT RUNS OF SGD OR SAMPLING FROM ONE
In (Mandt et al., 2017), the authors argue that in the later stages of SGD the weight iterates behave similarly to an Ornstein–Uhlenbeck process. So, by maintaining a constant learning rate, the SGD iterates should reach a stationary distribution that resembles a high-dimensional Gaussian. This distribution is centered at the local minimum; its covariance grows proportionally with the learning rate and inversely with the batch size, and its shape depends on both the Hessian of the mean loss and the covariance of the gradient.
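In our paraphrase of that model, the stationary covariance Σ of the iterates solves the Lyapunov equation

    AΣ + ΣAᵀ = (η/B)·C,

where A is the Hessian of the mean loss at the minimum, C is the covariance of the gradient noise, η is the learning rate, and B is the batch size; this makes explicit why Σ grows with η and shrinks with B.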
The authors of (Izmailov et al., 2018) argue that, by virtue of being a high-dimensional Gaussian, all the mass of the distribution is concentrated near the ‘shell’ of an ellipsoid, and therefore it is unlikely for SGD to access the interior. They further argue that sampling weights from a single SGD run (leaving enough time steps between samples) selects weights that are spread out on the surface of this ellipsoid, so their average will be closer to the center.
Without any further assumptions, we can justify sampling from different SGD runs (as done in phase 2 of SWAP): as long as all runs start in the same basin of attraction, and provided the model from (Mandt et al., 2017) holds, all runs converge to the same stationary distribution, and each run generates independent samples from it.
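A quick numerical illustration of the shell argument (the dimension and sample count are arbitrary choices of ours): independent draws from a high-dimensional Gaussian all have norm close to √d, while their average is markedly closer to the center:

import numpy as np

rng = np.random.default_rng(1)
d, n = 10_000, 8                              # dimension, number of samples
samples = rng.normal(size=(n, d))             # i.i.d. draws from N(0, I_d)
norms = np.linalg.norm(samples, axis=1)       # each is close to sqrt(d) = 100
avg_norm = np.linalg.norm(samples.mean(axis=0))
print(norms.round(1))                         # all near the shell
print(avg_norm.round(1))                      # roughly sqrt(d/n), much closer to 0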
4.2 ORTHOGONALITY OF THE GRADIENT AND THE DIRECTION TO THE CENTER OF BASIN
To gain some intuition about the advantage that SWA and SWAP have over SGD, we measure the cosine similarity between the gradient-descent direction, −g_i, and the direction towards the output of SWAP, Δθ = θ_swap − θ_i. In Figure 4, we see that the cosine similarity, 〈Δθ, −g_i〉 / (‖g_i‖ ‖Δθ‖), decreases as training enters its later stages. We believe that towards the end of training the angle between the gradient direction and the direction toward the center of the basin is large; the process therefore moves mostly orthogonally to the basin and progress slows. Averaging samples from different sides of the basin, however, can (and does) make faster progress towards the center.
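The quantity plotted in Figure 4 can be computed with a small helper, assuming both arguments are nonzero flattened NumPy vectors:

import numpy as np

def cosine_similarity(delta_theta, g):
    # <delta_theta, -g> / (||g|| * ||delta_theta||), as plotted in Figure 4.
    return float((delta_theta @ -g) /
                 (np.linalg.norm(g) * np.linalg.norm(delta_theta)))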
5 EXPERIMENTS
In this section we evaluate the performance of SWAP for image classification tasks on the CIFAR10, CIFAR100, and ImageNet datasets.
5.1 CIFAR10 AND CIFAR100
For the experiments in this subsection, we found the best hyper-parameters using grid searches (see Appendix A for details). We train using mini-batch SGD with Nesterov momentum (set to 0.9) and weight decay of 5×10⁻⁴. We augment the data using cutout (DeVries & Taylor, 2017) and use a fast-to-train custom ResNet-9 from a submission³ to the DAWNBench leaderboard (Coleman et al.). All experiments were run on one machine with 8 NVIDIA Tesla V100 GPUs and use Horovod (Sergeev & Del Balso, 2018) to distribute the computation. All statistics were collected over 10 different runs.
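For reference, this optimizer configuration corresponds to the following PyTorch call; the learning rate shown is a placeholder (the actual schedules are listed in Appendix A), and the linear layer stands in for the custom ResNet-9:

import torch
import torch.nn as nn

model = nn.Linear(10, 10)  # stand-in for the custom ResNet-9
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.1,               # placeholder; see Appendix A
                            momentum=0.9, nesterov=True,
                            weight_decay=5e-4)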
CIFAR10: For these experiments, we used the following settings. SWAP phase one: 4096 samples per batch using 8 GPUs (512 samples per GPU); phase one is terminated when the training accuracy reaches 98% (on average 108 epochs). SWAP phase two: 8 workers with one GPU each and 512 samples per batch, for 30 epochs. The experiment that uses only large batches had 4096 samples per batch across 8 GPUs and ran for 150 epochs. The experiments that use only small batches had 512 samples per batch on 2 GPUs and trained for 100 epochs.
Table 1 compares the best test accuracies and corresponding training times for models trained with small-batch only, with large-batch only, and with SWAP. We report the average accuracy of the workers before averaging and the accuracy of the final model.
CIFAR100: For these experiments, we use the following settings. SWAP phase one: 2048 samples per batch using 8 GPUs (256 samples per GPU); phase one exits when the training accuracy reaches 90% (on average 112 epochs). SWAP phase two: 8 workers with one GPU each and 128 samples per batch, training for 10 epochs. The experiments that use only large-batch training were run for 150 epochs with batches of 2048 on 8 GPUs. The experiments that use only small-batch training ran for 150 epochs using batches of 128 on 1 GPU.
Table 2 compares the best test accuracies and corresponding training times for models trained with only small-batches (for 150 epochs), with only large-batches (for 150 epochs), and with SWAP.
³ https://github.com/davidcpage/cifar10-fast
For SWAP, we report test accuracies obtained using the last SGD iterate before averaging, and test accuracy of the final model obtained after averaging. We observe significant improvement in test accuracies after averaging the models.
For both CIFAR10 and CIFAR100, training with small batches achieves higher test accuracy than training with large batches but takes much longer to train. SWAP, however, terminates in time comparable to the large-batch run while achieving accuracies on par with (or better than) small-batch training.
Achieving state-of-the-art training speeds for CIFAR10: At the time of writing, the front-runner of the DAWNBench competition takes 37 seconds with 4 Tesla V100 GPUs to train CIFAR10 to 94% test accuracy. Using SWAP with 8 Tesla V100 GPUs, a phase-one batch size of 2048 samples for 28 epochs, and a phase-two batch size of 256 samples for one epoch, we reach the same accuracy in 27 seconds.
5.2 EXPERIMENTS ON IMAGENET
We use SWAP to accelerate a publicly available fast-to-train ImageNet model with published learning-rate and batch-size schedules⁴. The default settings for this code modify the learning rates and batch sizes throughout the optimization (see Figure 5). Our small-batch experiments train ImageNet for 28 epochs using the published schedules with no modification and run on 8 Tesla V100 GPUs. Our large-batch experiments modify the schedules by doubling the batch size and doubling the learning rates (see Figure 5) and run on 16 Tesla V100 GPUs. For SWAP phase 1, we use the large-batch settings for 22 epochs; for SWAP phase 2, we run two independent workers, each with 8 GPUs, using the small-batch settings for 6 epochs.
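The only modification made for the large-batch run is the linear scaling of batch size and learning rate, which could be expressed as a small helper like the following; the (epoch, lr, batch_size) representation of the schedule is an assumption for illustration:

def scale_schedule(schedule, factor=2.0):
    # Linear-scaling heuristic: multiply learning rate and batch size by the
    # same factor for the large-batch configuration.
    return [(epoch, lr * factor, int(batch_size * factor))
            for epoch, lr, batch_size in schedule]

# Example: scale_schedule([(0, 0.1, 256), (10, 0.01, 256)]) doubles both.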
We observe that doubling the batch size reduces the Top-1 and Top-5 test accuracies relative to the small-batch run. SWAP, however, recovers the generalization performance at substantially reduced training times. Our results are compiled in Table 3 (statistics collected over 3 runs). It is worth noting that these accelerations were achieved with no tuning other than increasing the learning rates proportionally to the increase in batch size and reverting to the original schedule when transitioning between phases.
5.3 EMPIRICAL COMPARISON OF SWA AND SWAP
We now compare SWAP with SWA: the sequential weight averaging algorithm from Izmailov et al. (2018). For the experiments in this section, we use the CIFAR100 dataset. We sample the same number of models for both SWA and SWAP and maintain the same number of epochs per sample. For SWA, we sample each model with 10 epochs in-between and average them to get the final model. For SWAP, we run 8 independent workers for 10 epochs each and use their average as the final model.
Large-batch SWA: We explore whether SWA can recover the test accuracy of small-batch training when applied to a large-batch training run. We use the same (large) batch size throughout: an initial training run is followed by cyclic learning rates (with cycles of 10 epochs) used to sample 8 models (one from the end of each cycle). See Figure 6a for an illustration of the learning-rate schedule.
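A common form of such a cyclic schedule, in the style of Izmailov et al. (2018), is a linear decay that restarts every cycle; the exact shape used in these experiments may differ, so treat this as a sketch:

def cyclic_lr(step, steps_per_cycle, lr_max, lr_min):
    # Linearly decaying learning rate that restarts every cycle; a model is
    # sampled at the end of each cycle, where the rate bottoms out.
    frac = (step % steps_per_cycle) / steps_per_cycle
    return lr_max - (lr_max - lr_min) * frac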
As expected, we observe that the large-batch training run achieves lower test accuracy, but, surprisingly, SWA was unable to improve it (see Table 4, row 1).
Large-batch followed by small-batch SWA: We evaluate the effect of executing SWA using small batches after a large-batch training run. We interrupt the large-batch phase at the same accuracy at which we interrupt phase 1 of our CIFAR100 experiment (Table 2). In this case, the small-batch phase uses a single worker and samples the models sequentially. SWA is able to reach the test accuracy of a small-batch run but requires more than three times longer than SWAP to compute the model (see Table 4, row 2). An illustration of the learning-rate schedule is provided in Figure 6b.
Small-batch SWA and SWAP: We start the SWA cyclic learning-rate schedule from the best model found by small-batch training alone (Table 2, row 1). Since the cycle length and cycle count are fixed, the only free parameter is the peak learning rate, which we select using a grid search. Once the SWA schedule is specified, we re-use the peak learning-rate settings in SWAP, starting phase two from the model generated as the output of phase 1 for the experiment in Section 5.1 (Table 2, rows 3 and 4). With these settings, small-batch SWA achieves better accuracy than SWAP (by ∼0.9%) at 6.8× more training time. Next, we explore the speed-up that SWAP achieves over SWA if the accuracy of SWA is set as the target. To that end, we relax the constraints on SWAP: by increasing the phase-two schedule from one 10-epoch cycle to two 20-epoch cycles and sampling two models from each worker (16 models in total), the resulting model achieved a test accuracy of 79.11% in 241 seconds, or 3.5× less time.
⁴ Available at https://github.com/cybertronai/imagenet18_old
6 CONCLUSIONS AND FUTURE WORK
We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm that uses a variant of Stochastic Weight Averaging (SWA) to improve the generalization performance of a model trained with large mini-batches. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models trained using small-batches. The final model obtained after averaging has good generalization performance and is trained in a shorter time. We believe that this variant and this application of SWA are novel.
We observed that using large-batches in the initial stages of training does not preclude the models from achieving good generalization performance. That is, by refining the output of a large-batch run, with models sampled sequentially as in SWA or in parallel as in SWAP, the resulting model is able to perform as well as the models trained using small-batches only. We confirm this in the image classification datasets CIFAR10, CIFAR100, and ImageNet.
Through visualizations, we complement the existing evidence that averaged weights are closer to the center of a training-loss basin than the models produced by stochastic gradient descent. It is interesting to note that the basin into which the large mini-batch run converges seems to be the same basin in which the refined models are found. It is thus possible that regions with bad and good generalization performance are connected through regions of low training loss and, moreover, that both belong to an almost convex basin. Our method requires the choice of (at least) one more hyper-parameter: the transition point between the large-batch and small-batch phases. For our experiments, we chose this point using a grid search; a principled method to choose the transition point will be the focus of future work.
In future work, we intend to explore the behavior of SWAP when used with other optimization schemes, such as Layer-wise Adaptive Rate Scaling (LARS) (You et al., 2017), mixed-precision training (Jia et al., 2018), post-local SGD (Lin et al., 2018), or NovoGrad (Ginsburg et al., 2019). The design of SWAP allows us to substitute any of these for the large-batch stage; for example, we can use local SGD to accelerate the first stage of SWAP by reducing the communication overhead.
A HYPERPARAMETERS FOR CIFAR10 AND CIFAR100 EXPERIMENTS
We provide the parameters used in the experiments of Section 5.1. These were obtained by doing independent grid searches for each experiment. For all CIFAR experiments, the momentum and weight decay constants were kept at 0.9 and 5×10⁻⁴, respectively. Tables 5 and 6 list the remaining hyperparameters. When a stopping accuracy of 100% is listed, we mean that the maximum number of epochs was used. | 1. What is the main contribution of the paper, and how does it improve upon previous works?
2. What are the strengths and weaknesses of the proposed algorithm, particularly regarding its novelty and convergence properties?
3. How does the reviewer assess the clarity and friendliness of the paper's content, especially in explaining the Update() function and the differences between the proposed algorithm and prior works?
4. Are there any questions or concerns regarding the experimental results and their validation of the algorithm's performance? | Review | Review
This paper proposes a two-stage SGD variant that improves generalization. The experiments show good performance.
However, there are some weaknesses in this paper:
1. (Minor issue) The Update() function in Algorithm 1 seems to be something very general. However, it seems that Update() is simply SGD or SGD with (Nesterov) momentum, according to Section 5.1. Furthermore, the authors never explicitly explain what exact Update() function is used, which is very unfriendly to the readers.
2. The major issue is that the proposed algorithm and the contribution (improvement of generalization) are not novel. Phase 2 of Algorithm 1 is called local SGD, proposed in [1, 2]. Local SGD also has variants with Polyak momentum and Nesterov momentum [3]. Furthermore, [4] has already proposed an algorithm, post-local SGD, which is basically the same as Algorithm 1 in this paper (run fully synchronous SGD first and then local SGD). Note that [4] also shows that post-local SGD converges to flatter minima, and results in better generalization. Please correct me if I'm wrong, and explain the differences between Algorithm 1 and (post-)local SGD in detail.
----------
Reference
[1] Stich, Sebastian U.. “Local SGD Converges Fast and Communicates Little.” ArXiv abs/1805.09767 (2018).
[2] Yu, Hao et al. “Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning.” AAAI (2018).
[3] Yu, Hao et al. “On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization.” ICML (2019).
[4] Lin, Tao et al. “Don't Use Large Mini-Batches, Use Local SGD.” ArXiv abs/1808.07217 (2018). |
ICLR | Title
Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well | 1. What is the focus of the paper regarding parallel training for deep neural networks?
2. What are the strengths of the proposed approach, particularly in its ability to utilize large-batch training effectively?
3. What are the weaknesses of the paper, especially regarding the lack of theoretical analysis?
4. How sensitive is the algorithm to the choice of the transition point, and how was it tuned?
5. What is the relationship between SWAP and local SGD methods?
6. How does the proposed method compare to other optimization methods that have been studied in the literature, such as those mentioned in the review? | Review | Review
This paper proposes a parallel version of the stochastic weight averaging method. It utilizes two phases to train the DNN. The first phase consists of distributed large-batch training, where the learning rate is scaled linearly with respect to the batch size. The second phase consists of using small batches with SWA to obtain the final model. Experiments verify that this method is able to achieve generalization performance similar to small-batch methods in less training time. A comparison against small-batch SWA, large-batch SWA, and related baselines is also provided.
Strengths:
The proposed algorithm is a natural extension of SWA, and appears to give good generalization performance. It is able to utilize large-batch training effectively, which is perhaps surprising given the amount of tuning necessary in Shallue, et al. (2018) to achieve good performance. The experiments are well-detailed, and some interesting visualizations, graphs, and empirical analyses are provided.
Weaknesses:
I think that this paper is fairly complete; the only question is whether or not it contains enough novelty, as it is a natural extension of SWA to the parallelized setting. No theoretical analysis of the algorithm is given.
Some questions and comments:
- How sensitive is the algorithm to the choice of the transition point? How was the transition point tuned?
- How large of a batch size can one use before this approach breaks down or is no longer efficient?
- In Figure 2, LB is shown to obtain worse training error than SGD. What is the reason for this? This seems contrary to theory (assuming one has already converged to a neighborhood of the solution).
- The authors comment in the conclusion that "it is possible that regions with bad and good generalization performance are connected through regions of low training loss". Can one check if this is related to the invariances described in Dinh, et al. (2017)?
- What is the relationship between SWAP and methods on local SGD?
This paper is missing some classical optimization references on increasing batch sizes, which have been well studied in that literature:
[1] Byrd, Richard H., et al. "Sample size selection in optimization methods for machine learning." Mathematical programming 134.1 (2012): 127-155.
[2] Bollapragada, Raghu, Richard Byrd, and Jorge Nocedal. "Adaptive sampling strategies for stochastic optimization." SIAM Journal on Optimization 28.4 (2018): 3312-3343.
[3] Bollapragada, Raghu, et al. "A progressive batching L-BFGS method for machine learning." arXiv preprint arXiv:1802.05374 (2018).
[4] Friedlander, Michael P., and Mark Schmidt. "Hybrid deterministic-stochastic methods for data fitting." SIAM Journal on Scientific Computing 34.3 (2012): A1380-A1405.
Although developing some theory for this algorithm would be beneficial, this paper performs a comprehensive set of experiments and is well-written. For these reasons, I'm inclined to accept the paper. |
ICLR | Title
Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well
Abstract
We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The resulting models generalize equally well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction in training time and the good generalization performance of the resulting models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet.
1 INTRODUCTION
Stochastic gradient descent (SGD) and its variants are the de-facto methods to train deep neural networks (DNNs). Each iteration of SGD computes an estimate of the objective’s gradient by sampling a mini-batch of the available training data and computing the gradient of the loss restricted to the sampled data. A popular strategy to accelerate DNN training is to increase the mini-batch size together with the available computational resources. Larger mini-batches produce more precise gradient estimates; these allow for higher learning rates and achieve larger reductions of the training loss per iteration. In a distributed setting, multiple nodes can compute gradient estimates simultaneously on disjoint subsets of the mini-batch and produce a consensus estimate by averaging all estimates, with one synchronization event per iteration. Training with larger mini-batches requires fewer updates, thus fewer synchronization events, yielding good overall scaling behavior.
Even though the training loss can be reduced more efficiently, there is a maximum batch size after which the resulting model tends to have worse generalization performance (McCandlish et al., 2018; Keskar et al., 2016; Hoffer et al., 2017; Golmant et al., 2018; Shallue et al., 2018). This phenomenon forces practitioners to use batch sizes below those that achieve the maximum throughput and limits the usefulness of large-batch training strategies.
Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) is a method that produces models with good generalization performance by averaging the weights of a set of models sampled from the final stages of a training run. As long as the models all lie in a region where the population loss is mostly convex, the average model can behave well, and in practice, it does.
We have observed that if instead of sampling multiple models from a sequence generated by SGD, we generate multiple independent SGD sequences and average models from each, the resulting model achieves similar generalization performance. Furthermore, if all the independent sequences use small-batches, but start from a model trained with large-batches, the resulting model achieves generalization performance comparable with a model trained solely with small-batches. Using these observations, we derive Stochastic Weight Averaging in Parallel (SWAP): A simple strategy to accelerate DNN training by better utilizing available compute resources. Our algorithm is simple to implement, fast and produces good results with minor tuning.
For several image classification tasks on popular computer vision datasets (CIFAR10, CIFAR100, and ImageNet), we show that SWAP achieves generalization performance comparable to models trained with small-batches but does so in time similar to that of a training run with large-batches. We use SWAP on some of the most efficient publicly available models to date, and show that it’s
∗Equal contribution †Work done during an internship at Apple Inc.
able to substantially reduce their training times. Furthermore, we are able to beat the state of the art for CIFAR10 and train in 68% of the time of the winning entry of the DAWNBench competition.1
2 RELATED WORK
The mechanism by which the training batch size affects the generalization performance is still unknown. A popular explanation is that because of the reduced noise, a model trained using larger mini-batches is more likely to get stuck in a sharper global minima. In (Keskar et al., 2016), the authors argue that sharp minima are sensitive to variations in the data because slight shifts in the location of the minimizer will result in large increases in average loss value. However, if flatness is taken to be the curvature as measured by the second order approximation of the loss, then counterexamples exist. In (Dinh et al., 2017), the authors transform a flat minimizer into a sharp one without changing the behavior of the model, and in (Li et al., 2018), the authors show the reverse behavior when weight-decay is not used.
In (McCandlish et al., 2018), the authors predict that the batch size can be increased up to a critical size without any drop in accuracy and empirically validate this claim. For example, the accuracy begins to drop for image classification on CIFAR10 when the batch sizes exceed 1k samples. They postulate that when the batch size is large, the mini-batch gradient is close to the full gradient, and further increasing the batch size will not significantly improve the signal to noise ratio.
In (Hoffer et al., 2017), the authors argue that, for a fixed number of epochs, using a larger batch size implies fewer model updates. They argue that changing the number of updates impacts the distance the weights travel away from their initialization and that this distance determines the generalization performance. They show that by training with large-batches for longer times (thus increasing the number of updates), the generalization performance of the model is recovered. Even though this large-batch strategy generates models that generalize well, it does so in more time than the smallbatch alternative.
Irrespective of the generalization performance, the batch size also affects the optimization process. In (Ma et al., 2017), the authors show that for convex functions in the over-parameterized setting, there is a critical batch size below which an iteration with a batch size of M is roughly equivalent to M iterations with a batch size of one, and batch-sizes larger than M do not improve the rate of convergence.
Methods which use adaptive batch sizes exist (Devarakonda et al., 2017; Goyal et al., 2017; Jia et al., 2018; Smith et al., 2017; You et al., 2017). However, most of these methods are either designed for specific datasets or require extensive hyper-parameter tuning. Furthermore, they ineffectively use the computational resources by reducing the batch size during part of the training.
Local SGD (Zhang et al., 2016; Stich, 2018; Li et al., 2019; Yu et al., 2019) is a distributed optimization algorithm that trades off gradient precision with communication costs by allowing workers to independently update their models for a few steps before synchronizing. Post-local SGD (Lin et al., 2018) is a variant, which refines the output of large-batch training with local-SGD. The authors have observed that the resulting model has better generalization than the model trained with large-batches and that their scheme achieves significant speedups. In this manner Post-local SGD is of a very similar vein than the present work. However, while Post-local SGD lets the models diverge for T iterations where T is in the order of tens, SWAP averges the models once after multiple epochs. For example, in our Imagenet exeperiments (see Sec. 5) we average our models after tens of thousands of updates, while Post-local SGD does after at most 32. Because of this difference, we believe that the mechanisms that power the success of SWAP and Post-local SGD must be different and point to different phenomena in DNN optimization.
Stochastic weight averaging (SWA) (Izmailov et al., 2018) is a method where models are sampled from the later stages of an SGD training run. When the weights of these models are averaged, they result in a model with much better generalization properties. This strategy is very effective and has been adopted in multiple domains: deep reinforcement learning (Nikishin et al.), semisupervised learning (Athiwaratkun et al., 2019), Bayesian inference (Maddox et al., 2019), lowprecision training (Yang et al., 2019). In this work, we adapt SWA to accelerate DNN training.
1The https://dawn.cs.stanford.edu/benchmark/
3 STOCHASTIC WEIGHT AVERAGING IN PARALLEL
We describe SWAP as an algorithm in three phases (see Algorithm 1): In the first phase, all workers train a single model by computing large mini-batch updates. Synchronization between workers is required at each iteration and a higher learning rate is used. In the second phase, each worker independently refines its copy of the model to produce a different set of weights. Workers use a smaller batch size, a lower learning rate, and different randomizations of the data. No synchronization between workers is required in this phase. The last phase consists of averaging the weights of the resulting models and computing new batch-normalization statistics to produce the final output.
Phase 1 is terminated before the training loss reaches zero or the training accuracy reaches 100% (for example, a few percentage points below 100%). We believe that stopping early precludes the optimization from getting stuck at a location where the gradients are too small and allows the following stage to improve the generalization performance. However, the optimal stopping accuracy is a hyper-parameter that requires tuning.
During phase 2, the batch size is appropriately reduced and small-batch training is performed independently and simultaneously. Here, each worker (or a subset of them) performs training using all the data, but sampling in different random order. Thus, after the end of the training process, each worker (or subset) will have produced a different model.
Figure 1 plots the accuracies and learning-rate schedules for a run of SWAP. During the large-batch phase (phase 1), all workers share a common model and have the same generalization performance. During the small-batch phase (phase 2) the learning rates for all the workers are the same but their testing accuracies differ as the stochasticity causes the models to diverge from each other. We also plot the test-accuracy of the averaged model that would result were we to stop phase 2 at that point. Note that the averaged model performs consistently better than each individual model.
4 LOSS LANDSCAPE VISUALIZATION AROUND SWAP ITERATES
To visualize the mechanism behind SWAP, we plot the error achieved by our test network on a plane that contains the outputs of the three different phases of the algorithm. Inspired by (Garipov et al., 2018) and (Izmailov et al., 2018), we pick orthogonal vectors u, v that span the plane which contains θ1, θ2, θ3. We plot the loss value generated by model θ = θ1+αu+βv at the location (α, β). To plot a loss value, we first generate a weight vector θ, compute the batch-norm statistics for that model (through one pass over the training data), and then evaluate the test and train accuracies.
In Figure 2, we plot the training and testing error for the CIFAR10 dataset. Here ‘LB’ marks the output of phase one, ‘SGD’ the output of a single worker after phase two, and ‘SWAP’ the final
Algorithm 1: Stochastic Weight Averaging in Parallel (SWAP)
Input: number of workers W; weight initialization θ0; t = 0;
       training accuracy τ at which to exit phase one;
       learning-rate schedules LR1 and LR2 for phases one and two, respectively;
       mini-batch sizes B1 and B2 for phases one and two, respectively;
       gradient of the loss for sample i at weight θ: g_i;
       SGDUpdate(·): a function that updates the weights using SGD with momentum and weight decay.
Phase 1:
while training accuracy ≤ τ do
    η ← LR1(t)
    for w in [0, ..., W − 1] in parallel do
        B_w ← random sub-sample of the training data of size B1/W
        g_w ← (W/B1) Σ_{i ∈ B_w} g_i                               /* worker gradient */
    end
    g_t ← (1/W) Σ_w g_w                                            /* synchronization of worker gradients */
    θ_{t+1} ← θ_t + SGDUpdate(η_t, g_t, g_{t−1}, ...)              /* first-order method update */
    t ← t + 1; T ← t
end
Phase 2:
for t in [T, ..., T + Q] do
    η ← LR2(t − T)
    for w in [0, ..., W − 1] in parallel do
        B_w ← random sub-sample of the training data of size B2
        g_w ← (1/B2) Σ_{i ∈ B_w} g_i                               /* worker gradient */
        θ^w_{t+1} ← θ^w_t + SGDUpdate(η_t, g^w_t, g^w_{t−1}, ...)  /* update at local worker */
    end
end                                                                /* W different models at the end of phase 2 */
Phase 3:
θ̂_ℓ ← (1/W) Σ_w θ^w_{T+Q}                                          /* produce the averaged model */
Compute batch-norm statistics for θ̂_ℓ to produce θ_ℓ.
Result: final model θ_ℓ
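To make the control flow concrete, here is a minimal single-process sketch of Algorithm 1 in PyTorch. The helpers train_one_epoch, train_accuracy, and recompute_bn_stats are hypothetical callbacks (not from the paper's code), the learning-rate schedules are simplified to constants, and a real run would execute phase 2 on W separate workers with no synchronization.

```python
import copy
import torch

def swap_train(model, large_loader, small_loader, tau, lr1, lr2, W, q_epochs,
               train_one_epoch, train_accuracy, recompute_bn_stats):
    """Single-process sketch of SWAP (Algorithm 1)."""
    # Phase 1: synchronous large-batch training until accuracy tau.
    opt = torch.optim.SGD(model.parameters(), lr=lr1, momentum=0.9, weight_decay=5e-4)
    while train_accuracy(model) <= tau:
        train_one_epoch(model, large_loader, opt)
    # Phase 2: W independent small-batch refinements of the shared weights,
    # each with its own data shuffling and a lower learning rate.
    workers = [copy.deepcopy(model) for _ in range(W)]
    for m in workers:
        opt_w = torch.optim.SGD(m.parameters(), lr=lr2, momentum=0.9, weight_decay=5e-4)
        for _ in range(q_epochs):
            train_one_epoch(m, small_loader, opt_w)
    # Phase 3: average the W weight vectors, then recompute batch-norm statistics.
    avg_state = copy.deepcopy(workers[0].state_dict())
    for key in avg_state:
        if avg_state[key].is_floating_point():
            avg_state[key] = sum(m.state_dict()[key] for m in workers) / W
    model.load_state_dict(avg_state)
    recompute_bn_stats(model, small_loader)  # one pass over the training data
    return model
```

Averaging only the floating-point entries of the state dict sidesteps integer buffers such as batch-norm step counters; their statistics are recomputed in phase 3 anyway.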
Color codes correspond to error measures at the points interpolated on the plane. In Figure 2a, we observe that the level sets of the training error (restricted to this plane) form an almost convex basin and that both the output of phase 1 ('LB')[2] and the output of one of the workers of phase 2 ('SGD') lie on the outer edges of the basin. Importantly, during phase 2 the model traversed to a different side of the basin (and not to the center). Also, the final model ('SWAP') is closer to the center of the basin.
When we visualize these three points on the test loss landscape (Figure 2b), we observe that the variations in the topology of the basin cause the ‘LB’ and ‘SGD’ points to fall in regions of higher error. But, since the ‘SWAP’ point is closer to the center of the basin, it is less affected by the change in topology. In Figure 3, we neglect the ‘LB’ point and plot the plane spanned by three workers ‘SGD1’, ‘SGD2’, ‘SGD3’. In Figure 3a, we can observe that these points lie at different sides of the training error basin while ‘SWAP’ is closer to the center. In Figure 3b, we observe that the change in topology causes the worker points to lie in regions of higher testing errors than ‘SWAP’, which is again close to the center of both basins. For reference, we have also plotted the best model that can be generated by this region of the plane.
[2] Recall that the weights 'LB' are obtained by stopping the large-batch training early in phase 1. Hence, the training error for 'LB' is worse than for 'SGD' and 'SWAP'.
4.1 SAMPLING FROM INDEPENDENT RUNS OF SGD OR FROM A SINGLE RUN
Mandt et al. (2017) argue that in the later stages of SGD the weight iterates behave similarly to an Ornstein-Uhlenbeck process. So, under a constant learning rate, the SGD iterates should reach a stationary distribution that resembles a high-dimensional Gaussian. This distribution is centered at the local minimum, has a covariance that grows proportionally with the learning rate and inversely with the batch size, and has a shape that depends on both the Hessian of the mean loss and the covariance of the gradient.
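For concreteness, a common statement of this approximation, reproduced here from memory of Mandt et al. (2017) with adapted notation (the exact constants should be checked against the original), is:

```latex
% OU approximation of constant-learning-rate SGD (after Mandt et al., 2017)
d\theta(t) = -\epsilon A\,\theta(t)\,dt + \frac{\epsilon}{\sqrt{S}}\, B \, dW(t),
\qquad
A\Sigma + \Sigma A^{\top} = \frac{\epsilon}{S}\, B B^{\top}.
```

Here ε is the learning rate, S the batch size, A the Hessian of the mean loss at the minimum, and C = BBᵀ the gradient noise covariance; the Lyapunov equation shows the stationary covariance Σ growing with ε, shrinking with S, and depending on both A and C, as described above.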
The authors of (Izmailov et al., 2018) argue that, by virtue of being a high-dimensional Gaussian, almost all the mass of the distribution is concentrated near the 'shell' of an ellipsoid, and it is therefore unlikely for SGD to access the interior. They further argue that sampling weights from a single SGD run (leaving enough time steps between samples) will choose weights that are spread out on the surface of this ellipsoid, so their average will be closer to the center.
Without any further assumptions, we can justify sampling from different SGD runs (as done in phase 2 of SWAP): as long as all runs start in the same basin of attraction, and provided the model of Mandt et al. (2017) holds, all runs converge to the same stationary distribution, and each run can generate independent samples from it.
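A quick numerical illustration of why averaging such samples helps; the dimension and worker count below are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, W = 10_000, 8                           # illustrative dimension and worker count
samples = rng.standard_normal((W, d))      # stand-ins for weights sampled near a minimum

# Individual samples concentrate on a shell of radius ~sqrt(d) around the center,
# while the average of W independent samples lies ~sqrt(W) times closer to it.
print(np.linalg.norm(samples, axis=1).mean())  # ~100.0 for d = 10,000
print(np.linalg.norm(samples.mean(axis=0)))    # ~100 / sqrt(8) ≈ 35.4
```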
4.2 ORTHOGONALITY OF THE GRADIENT AND THE DIRECTION TO THE CENTER OF BASIN
To gain some intuition about the advantage that SWA and SWAP have over SGD, we measure the cosine similarity between the gradient descent direction, −g_i, and the direction towards the output of SWAP, Δθ = θ_swap − θ_i. In Figure 4, we see that the cosine similarity, ⟨Δθ, −g_i⟩ / (‖g_i‖ ‖Δθ‖), decreases as the training enters its later stages. We believe that towards the end of training the angle between the gradient direction and the direction toward the center of the basin is large; the process therefore moves mostly orthogonally to that direction, and progress slows. However, averaging samples from different sides of the basin can (and does) make faster progress towards the center.
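A sketch of this measurement, assuming the weights and gradients have been flattened into single vectors (for example with torch.nn.utils.parameters_to_vector):

```python
import torch

def cosine_to_swap(theta_i, grad_i, theta_swap):
    """Cosine similarity between the descent direction -g_i and the direction
    from the current iterate theta_i to the SWAP solution theta_swap."""
    delta = theta_swap - theta_i
    return torch.dot(delta, -grad_i) / (delta.norm() * grad_i.norm())
```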
5 EXPERIMENTS
In this section we evaluate the performance of SWAP for image classification tasks on the CIFAR10, CIFAR100, and ImageNet datasets.
5.1 CIFAR10 AND CIFAR100
For the experiments in this subsection, we found the best hyper-parameters using grid searches (see Appendix A for details). We train using mini-batch SGD with Nesterov momentum (set to 0.9) and weight decay of 5×10⁻⁴. We augment the data using cutout (DeVries & Taylor, 2017) and use a fast-to-train custom ResNet-9 from a submission[3] to the DAWNBench leaderboard (Coleman et al.). All experiments were run on one machine with 8 NVIDIA Tesla V100 GPUs and use Horovod (Sergeev & Del Balso, 2018) to distribute the computation. All statistics were collected over 10 different runs.
CIFAR10: For these experiments, we used the following settings. SWAP phase one: 4096 samples per batch using 8 GPUs (512 samples per GPU); phase one is terminated when the training accuracy reaches 98% (on average 108 epochs). SWAP phase two: 8 workers with one GPU each and 512 samples per batch for 30 epochs. The experiment that uses only large batches had 4096 samples per batch across 8 GPUs and is run for 150 epochs. The experiments that use only small batches had 512 samples per batch on 2 GPUs and are trained for 100 epochs.
Table 1 compares the best test accuracies and corresponding training times for models trained with small-batch only, with large-batch only, and with SWAP. We report the average accuracy of the workers before averaging and the accuracy of the final model.
CIFAR100: For these experiments, we use the following settings. SWAP phase one: 2048 samples per batch using 8 GPUs (256 samples per GPU); phase one exits when the training accuracy reaches 90% (on average 112 epochs). SWAP phase two: 8 workers with one GPU each and 128 samples per batch, training for 10 epochs. The experiments that use only large-batch training were run for 150 epochs with batches of 2048 on 8 GPUs. The experiments that use only small batches were trained for 150 epochs using batches of 128 on 1 GPU.
Table 2 compares the best test accuracies and corresponding training times for models trained with only small-batches (for 150 epochs), with only large-batches (for 150 epochs), and with SWAP.
[3] https://github.com/davidcpage/cifar10-fast
For SWAP, we report the test accuracies obtained using the last SGD iterate before averaging, and the test accuracy of the final model obtained after averaging. We observe a significant improvement in test accuracy after averaging the models.
For both CIFAR10 and CIFAR100, training with small batches achieves higher test accuracy than training with large batches but takes much longer. SWAP, however, terminates in time comparable to the large-batch run while achieving accuracies on par with (or better than) small-batch training.
Achieving state-of-the-art training speeds for CIFAR10: At the time of writing, the front-runner of the DAWNBench competition takes 37 seconds with 4 Tesla V100 GPUs to train CIFAR10 to 94% test accuracy. Using SWAP with 8 Tesla V100 GPUs (a phase-one batch size of 2048 samples for 28 epochs, followed by a phase-two batch size of 256 samples for one epoch), we reach the same accuracy in 27 seconds.
5.2 EXPERIMENTS ON IMAGENET
We use SWAP to accelerate a publicly available fast-to-train ImageNet model with published learning-rate and batch-size schedules[4]. The default settings for this code modify the learning rates and batch sizes throughout the optimization (see Figure 5). Our small-batch experiments train ImageNet for 28 epochs using the published schedules with no modification and are run on 8 Tesla V100 GPUs. Our large-batch experiments modify the schedules by doubling the batch size and doubling the learning rates (see Figure 5) and are run on 16 Tesla V100 GPUs. For SWAP phase 1, we use the large-batch settings for 22 epochs, and for SWAP phase 2, we run two independent workers, each with 8 GPUs, using the small-batch settings for 6 epochs.
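The only modification for the large-batch runs is the linear scaling rule sketched below; the per-epoch values shown are illustrative placeholders, not the published schedules.

```python
def scale_schedules(lrs, batch_sizes, factor=2):
    """Linear scaling rule: grow the batch size and the learning rate together."""
    return [lr * factor for lr in lrs], [bs * factor for bs in batch_sizes]

# Illustrative (not the published) per-epoch values:
published_lrs, published_batches = [0.1, 0.4, 0.04], [256, 256, 512]
# Phase 1 uses the doubled schedules; phase 2 reverts to the originals.
large_lrs, large_batches = scale_schedules(published_lrs, published_batches, factor=2)
```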
We observe that doubling the batch size reduces the Top-1 and Top-5 test accuracies with respect to the small-batch run. SWAP, however, recovers the generalization performance at substantially reduced training times. Our results are compiled in Table 3 (statistics were collected over 3 runs). It is worth noting that these accelerations were achieved with no tuning other than increasing the learning rates proportionally to the increase in batch size and reverting to the original schedule when transitioning between phases.
5.3 EMPIRICAL COMPARISON OF SWA AND SWAP
We now compare SWAP with SWA: the sequential weight averaging algorithm from Izmailov et al. (2018). For the experiments in this section, we use the CIFAR100 dataset. We sample the same number of models for both SWA and SWAP and maintain the same number of epochs per sample. For SWA, we sample each model with 10 epochs in-between and average them to get the final model. For SWAP, we run 8 independent workers for 10 epochs each and use their average as the final model.
Large-batch SWA: We explore whether SWA can recover the test accuracy of small-batch training when applied to a large-batch training run. We use the same (large) batch size throughout. We follow the initial training with cyclic learning rates (cycles of 10 epochs) to sample 8 models (one from the end of each cycle). See Figure 6a for an illustration of the learning-rate schedule.
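A minimal sketch of such a schedule; the peak and minimum rates are illustrative (the paper selects the peak by grid search), and the linear within-cycle decay follows the usual SWA recipe.

```python
def swa_cyclic_lr(epoch, cycle_len=10, lr_max=0.05, lr_min=0.0005):
    """Cyclic schedule for SWA sampling: within each cycle the rate decays
    linearly from lr_max to lr_min; a model is sampled at the end of each cycle."""
    frac = (epoch % cycle_len) / cycle_len
    return lr_max - frac * (lr_max - lr_min)

sample_epochs = [c * 10 - 1 for c in range(1, 9)]  # 8 samples, one per 10-epoch cycle
```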
As expected, we observe that the large-batch training run achieves lower accuracy, but surprisingly SWA was unable to improve it (see Table 4, row 1).
Large-batch followed by small-batch SWA: We evaluate the effect of executing SWA using small batches after a large-batch training run. We interrupt the large-batch phase at the same accuracy at which we interrupt phase 1 of our CIFAR100 experiment (Table 2). In this case, the small-batch phase uses a single worker and samples the models sequentially. SWA is able to reach the test accuracy of a small-batch run but requires more than three times longer than SWAP to compute the model (see Table 4, row 2). An illustration of the learning-rate schedule is provided in Figure 6b.
Small-batch SWA and SWAP: We start the SWA cyclic learning-rate schedule from the best model found by small-batch training alone (Table 2, row 1). Since the cycle length and cycle count are fixed, the only free parameter is the peak learning rate, which we select using a grid search. Once the SWA schedule is specified, we re-use the peak learning-rate settings in SWAP. We start phase two from the model that was generated as the output of phase 1 for the experiment of Section 5.1 reported in Table 2, rows 3 and 4. With these settings, small-batch SWA achieves better accuracy than SWAP (by around 0.9%) but requires 6.8× more training time. Next, we explore the speed-up that SWAP achieves over SWA when the accuracy of SWA is set as the target. To that end, we relax the constraints on SWAP: by increasing the phase-two schedule from one 10-epoch cycle to two 20-epoch cycles and sampling two models from each worker (16 models), the resulting model achieved a test accuracy of 79.11% in 241 seconds, or 3.5× less time.
[4] Available at https://github.com/cybertronai/imagenet18_old
6 CONCLUSIONS AND FUTURE WORK
We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm that uses a variant of Stochastic Weight Averaging (SWA) to improve the generalization performance of a model trained with large mini-batches. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models trained using small-batches. The final model obtained after averaging has good generalization performance and is trained in a shorter time. We believe that this variant and this application of SWA are novel.
We observed that using large batches in the initial stages of training does not preclude the models from achieving good generalization performance. That is, by refining the output of a large-batch run, with models sampled sequentially as in SWA or in parallel as in SWAP, the resulting model is able to perform as well as models trained using small batches only. We confirm this on the image classification datasets CIFAR10, CIFAR100, and ImageNet.
Through visualizations, we complement the existing evidence that averaged weights are closer to the center of a training-loss basin than the models produced by stochastic gradient descent. It is interesting to note that the basin into which the large mini-batch run converges seems to be the same basin where the refined models are found. So, it is possible that regions with bad and good generalization performance are connected through regions of low training loss and, more so, that both belong to an almost convex basin. Our method requires the choice of (at least) one more hyperparameter: the transition point between the large-batch and small-batch phases. For our experiments, we chose this point using a grid search; a principled method for choosing it will be the focus of future work.
In future work we intend to explore the behavior of SWAP when used with other optimization schemes, such as Layer-wise Adaptive Rate Scaling (LARS) (You et al., 2017), mixed-precision training (Jia et al., 2018), post-local SGD (Lin et al., 2018), or NovoGrad (Ginsburg et al., 2019). The design of SWAP allows us to substitute any of these for the large-batch stage; for example, we can use local SGD to accelerate the first stage of SWAP by reducing the communication overhead.
A HYPERPARAMETERS FOR CIFAR10 AND CIFAR100 EXPERIMENTS
We provide the parameters used in the experiments of Section 5.1. These were obtained by doing independent grid searches for each experiment. For all CIFAR experiments, the momentum and weight decay constants were kept at 0.9 and 5×10⁻⁴, respectively. Tables 5 and 6 list the remaining hyperparameters. When a stopping accuracy of 100% is listed, we mean that the maximum number of epochs was used. | 1. What is the purpose of computing averages in the local models in worker 1, 2, 3 when they can achieve the same testing accuracy without averaging?
2. Why do the authors claim that other training schemes require more hyperparameter tuning specific to the dataset, while the proposed method requires tuning the switch point between phase 1 and phase 2?
3. Is it necessary to use warmup for small batch sizes in the experiment?
4. Should layer-wise learning rate scaling be used in large batch training for improved performance?
5. How can the switch point between phase 1 and phase 2 be selected effectively, and should the authors provide guidelines for this process? | Review | Review
In the paper, the authors propose a novel three-stage training strategy for deep learning models: training with large batches, training with small batches locally, and then aggregating the models. Experimental results show that the proposed method converges faster than the compared methods. I have the following concerns:
1) In Figure 1, it looks like the local models in workers 1, 2, and 3 can reach the same testing accuracy without averaging. What is the purpose of computing the average in this case?
2) In the paper, the authors mention that "Note that there exist training schemes in the literature that train on even larger batch sizes such as 32k (You et al., 2017; Jia et al., 2018), but these methods require a lot of hyperparameter tuning specific to the dataset." As far as I know, those methods just need to tune the warmup steps and the peak learning rate, which is also required in this paper. Moreover, the proposed method requires tuning the switch point between phase 1 and phase 2.
3) The experiments also use warmup for small batch sizes; is this necessary?
4) Does the large-batch training use layer-wise learning rate scaling? From my point of view, it is better to use it in large-batch training.
5) A guideline for selecting the switch point between phase 1 and phase 2 should be given if it takes time to tune.
ICLR | Title
Neural Networks Playing Dough: Investigating Deep Cognition With a Gradient-Based Adversarial Attack
Abstract
Discovering adversarial examples has shaken our trust in the reliability of deep learning. Even though brilliant works have been devoted to understanding and fixing this vulnerability, fundamental questions (e.g. adversarial transferability) remain unanswered. This paper tests the hypothesis that it is not the neural networks failing to learn that causes adversarial vulnerability, but their different perception of the presented data; adversarial examples should therefore be semantically sensitive signals which can provide us with an exceptional opening for understanding the networks' learning. To investigate this hypothesis, I performed a gradient-based attack on fully connected feed-forward and convolutional neural networks, instructing them to minimally evolve controlled inputs into adversarial examples for all the classes of the MNIST and Fashion-MNIST datasets. I then abstracted the adversarial perturbations from these examples. The perturbations unveiled vivid and recurring visual structures, unique to each class and persistent over the parameters of the abstraction methods, model architectures, and training configurations. Furthermore, these patterns proved to be explainable and derivable from the corresponding dataset. This finding explains the generalizability of adversarial examples by semantically tying them to the datasets. In conclusion, this experiment not only resists the interpretation of adversarial examples as a failure of deep learning but, on the contrary, demystifies them as supporting evidence for the authentic learning capacity of networks.
1 INTRODUCTION
Szegedy et al. (2013) introduced the term "ADversarial Examples" (ADEs), and with that they unveiled a terrifyingly effortless technique for fooling highly accurate neural networks into ludicrous misclassifications of obvious images. However, it took the field four years, and a small sticker patch featuring some alien curves with a metallic shine (Brown et al., 2017), before realizing how dire the situation could get, at a time when we unlock our phones (Bryliuk & Starovoitov, 2002) and law enforcement identifies suspected criminals (Garvie, 2016) with the very same technology.
ADEs, typically, are copies of images that a network classifies correctly with high confidence, plus Adversarial Perturbations (APs), indiscernible to human eyes, which mislead the network into classifying them incorrectly with even higher confidence. Inspirational works have contributed to understanding the nature of ADEs. The early speculations concerned the high dimensionality of data (Szegedy et al., 2013) and then the linearity of learning in networks (Goodfellow et al., 2014). But these explanations fall short of addressing the phenomenon known as adversarial transferability: the fact that ADEs created with one network on one subset of a dataset can also fool other models trained on different subsets. Later on, Jo & Bengio (2017) and Ilyas et al. (2019) proposed different accounts that consider, respectively, statistical regularities and non-robust features of datasets as the nature of ADEs. These suggestions offer grounds for understanding transferability. However, in the absence of an agreed-upon explanation, there is still ample room to explore.
The present study shifts the focus from the ADEs to the APs, and is based on the hypothesis that adversarial vulnerability is not a network weakness, but a consequence of the different ways in which humans and networks perceive datasets. In that respect, the APs represent the networks' genuine perception of datasets. Therefore, studying APs could be a rare opportunity for understanding the learning process in networks, and it may also answer our lingering questions about the ADEs. To this end, I used a gradient-based attack to produce ADEs for the MNIST (LeCun et al., 2010) and Fashion-MNIST (Xiao et al., 2017) (F-MNIST) datasets. Then I abstracted the APs, visualized them, and investigated whether there is a way to explain them with the datasets; not with some statistical or non-robust features, but entirely through the high-level semantics of the datasets.
2 METHODS
2.1 GENERATING ADVERSARIAL EXAMPLES
For generating ADEs, I performed a simple gradient-based adversarial attack, introduced by Szegedy et al. (2013). In this method, a trained network runs gradient descent optimization on the input instead of the weights, and treats the weights as non-trainable parameters. The purpose is to alter the input image into something the network would classify with a (commonly nonsensical) target label. To keep the change minimal and limited to the most decisive pixels, I defined the loss function as the distance between the network prediction and the target label, omitting any further constraint on the final visual appearance of the input. To have the freedom to explore the models' behavior, rather than the traditional "target classification," I set a disjunctive stop condition for terminating the optimization loop. The stop condition consists of: a maximum loss threshold, that is, the loss value calculated between the prediction for the optimized input and the target label; a minimum level of confidence for the network's prediction on the optimized input; and finally a maximum number of iterations, counting the rounds of optimization of the input and initially designed to avoid infinite loops.
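A minimal PyTorch sketch of this attack follows, assuming cross-entropy as the distance between the prediction and the target label; the step size lr and the default thresholds are illustrative, not values from the paper.

```python
import torch
import torch.nn.functional as F

def evolve_input(model, x0, target, max_iters=1000, lr=1.0,
                 loss_threshold=None, confidence=None):
    """Gradient descent on the input toward `target`; the weights stay frozen.
    `x0` is a batched input, e.g. shape (1, 784) for the FC models;
    `target` is a length-1 LongTensor holding the target class index."""
    model.eval()
    x = x0.detach().clone().requires_grad_(True)
    for _ in range(max_iters):
        logits = model(x)
        loss = F.cross_entropy(logits, target)             # distance to the target label
        conf = F.softmax(logits, dim=1)[0, target].item()  # prediction confidence
        # Disjunctive stop condition: loss threshold OR confidence level
        # (the iteration cap is the enclosing loop bound).
        if (loss_threshold is not None and loss.item() < loss_threshold) or \
           (confidence is not None and conf > confidence):
            break
        if x.grad is not None:
            x.grad.zero_()
        loss.backward()
        with torch.no_grad():
            x -= lr * x.grad                               # gradient step on the input
    return x.detach()
```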
2.2 ABSTRACTING THE ADVERSARIAL PERTURBATIONS
Given the networks' acuity with raw data, compared to the weak visual perception of humans, APs are invisible to us. Therefore, to make them visible, I took one direct and two indirect approaches. In the direct approach (zero feed), I fed the networks a zero array; when the input is zero, the modified array is purely AP. To verify the reproducibility of the patterns obtained with the direct method, I also generated ADEs with normal noise as input. The drawback is that with a noise input the ADEs will be noisy as well, and the signal-to-noise ratio is so poor that no patterns would be visible. Therefore, in one indirect approach (noise feed), I simply subtracted the initial noise input from the final ADE to harvest the AP. In the next indirect method (Adversarial Average Image (AAI)), I aimed to accentuate the APs by computing cumulative adversarial images. In doing so, I generated ADEs in batches of (generally) 30 and added them up into one AAI, so that the noise cancels out while the AP is reinforced.
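Assuming the evolve_input sketch from Section 2.1 above, the three abstraction methods reduce to a few lines each:

```python
import torch

def ap_zero_feed(model, target, shape, **kw):
    """Direct method: with a zero input, the evolved image is pure perturbation."""
    return evolve_input(model, torch.zeros(shape), target, **kw)

def ap_noise_feed(model, target, shape, **kw):
    """Indirect method: subtract the initial noise from the finished ADE."""
    x0 = torch.randn(shape)
    return evolve_input(model, x0, target, **kw) - x0

def ap_average_image(model, target, shape, n=30, **kw):
    """AAI: sum a batch of noise-seeded ADEs so the noise cancels out."""
    ades = [evolve_input(model, torch.randn(shape), target, **kw) for _ in range(n)]
    return torch.stack(ades).sum(dim=0)
```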
To compare the APs produced with these three methods, I trained a Fully Connected feed-forward (FC) network [784, 200, 30, 10] with a Stochastic Gradient Descent (SGD) optimizer on the MNIST dataset, up to the validation accuracy of 0.92 and loss value of 0.27. After some trial and error, I
found that maximum iterations is the most efficient stop condition, since the networks could engineer ADEs for some classes dramatically faster than for others, which leaves the APs visually incomprehensible; that is especially the case with the AAI method. As a result, for generating ADEs in all three methods, the only stop condition I applied was the maximum iterations, set to 1,000. I then computed APs for all 10 classes, but did not normalize them, because the results are presented visually, using the pyplot.imshow method of the Matplotlib library (Hunter, 2007), which by default normalizes images to their minimum and maximum values; for the purpose of this study, it is contrast that matters and not the true values of the shades.
2.3 INVESTIGATING THE PATTERNS’ ROBUSTNESS TO THE ABSTRACTION PARAMETERS
To investigate whether the AP patterns are independent of the abstraction methods, I extracted APs from several ADEs, with a range of different parameters for each approach. For generating ADEs, I used the aforementioned FC architecture and trained it with the same optimizer on the MNIST dataset; this time up to a validation accuracy of 0.94 and a loss value of 0.19. Then I computed APs for class 5: with the zero feed method, with the stop condition set at three levels of confidence (80, 90, and >99%); with the noise feed method, with the stop condition set at three values of loss (1, 1e-2, and 1e-4); and with the AAI method, averaging over 10, 100, and 1,000 ADEs.
2.4 INVESTIGATING THE PATTERNS’ ROBUSTNESS TO THE MODEL AND TRAINING PARAMETERS
Furthermore, to verify the persistence of the AP patterns over the network architecture and the training configuration, in addition to the aforementioned FC, I also tested a Convolutional Neural Network (CNN) [convolutional layers: 128, 256, followed by dense layers: 512, 256, 10]. I trained the two models with six configurations combined, this time on both the MNIST and F-MNIST datasets (see Table 1). Note that I did not try to train the networks to their fullest learning capacity. In fact, as the complexity of the architecture and training grew, I reduced the number of training
epochs, because generating ADEs with highly accurate and complicated architectures became prohibitively expensive computationally. After training the networks, I generated APs only with the zero and noise feed methods, given that the AAI approach is considerably more time-consuming. For presenting the results, I hand-picked one of the two methods based on the clarity of the patterns, while trying to include more diversity.
2.5 ABSTRACTING THE ADVERSARIAL PATTERNS DIRECTLY FROM THE DATASETS
The legible topology of the AP patterns, paired with the fact that in classification tasks deep learning is organized in categories, motivated me to investigate the hypothesis that these patterns represent a categorical perception of the classes in the dataset, in which a concept is introduced not only by instances of what it is but, equally importantly, through instances of what it is not. To test this hypothesis, for each class of the MNIST and F-MNIST datasets, I divided the corresponding dataset into a positive set (all samples belonging to that class) and a negative set (all samples belonging to the other classes). Next, I computed the average images for both sets and normalized them between 0 and 1. Finally, I calculated the Positive-Negative Contrast image (PNC) by subtracting the negative average image from the positive one.
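A direct NumPy rendering of this procedure, where images is an (N, 28, 28) array and labels an (N,) array of class indices:

```python
import numpy as np

def pnc(images, labels, cls):
    """Positive-Negative Contrast image for class `cls` (Section 2.5)."""
    def normalize(a):
        return (a - a.min()) / (a.max() - a.min())
    positive = images[labels == cls].mean(axis=0)   # average over the class itself
    negative = images[labels != cls].mean(axis=0)   # average over all other classes
    return normalize(positive) - normalize(negative)
```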
2.6 CONFIRMING THE RESULTS WITH TWO DUMMY DATASETS
To inspect the categorical learning hypothesis further, I created two dummy datasets, each with four classes and each class with one sample; they are deliberately simplistic so as to secure perfect learning by the networks. The first is the tiling dataset, with four classes of a square patch at each corner of the image, such that together they tile the image space. For the tiling dataset, all PNCs precisely match their corresponding class (see Fig. 5). The second is the overlapping dataset, with the same four classes of square patches but with shifted positions such that, if put together, they overlap. Therefore, the PNCs differ from their corresponding classes (see Fig. 6). I computed the PNCs and APs for these two datasets using an FC [36, 200, 50, 4] trained with SGD up to a validation accuracy of 100% and a loss value of 5.8e-4.
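A sketch of the two datasets for 6×6 inputs (matching the 36-unit input layer); the exact patch size and the shifted positions of the overlapping classes are assumptions, since the paper does not specify them.

```python
import numpy as np

def make_patch_dataset(positions, size=6, patch=3):
    """Four one-sample classes, each a square patch at the given top-left corner."""
    X = np.zeros((len(positions), size, size))
    for k, (r, c) in enumerate(positions):
        X[k, r:r + patch, c:c + patch] = 1.0
    return X, np.arange(len(positions))

tiling      = make_patch_dataset([(0, 0), (0, 3), (3, 0), (3, 3)])  # patches tile the image
overlapping = make_patch_dataset([(1, 1), (1, 2), (2, 1), (2, 2)])  # patches overlap
```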
2.7 COMPUTATIONAL RESOURCE
I ran all the computations for this study on an NVIDIA GPU (GeForce® GTX 1070). On this system, generating an AAI with 30 ADEs, with the stop condition of maximum iterations set to 1,000, took 5 minutes on average. For an AP with either the zero or the noise method, with the same stop condition, the duration plummeted to only 5 seconds.
3 RESULTS
The three methods for obtaining APs produced almost identical patterns, all of which the network classified with the target labels and a remarkably high confidence (see Fig. 1).
These patterns, as soon as they emerge, are indifferent to the parameters of the method used for generating them (see Fig. 2). Furthermore, the patterns persist over a range of network architectures
and training configurations. However, as the complexity of the architectures and optimizers grows, the clarity of the patterns deteriorates. Even so, the less intelligible APs highlight some features of the same patterns, which are easier to place in the context of the other perturbations for the same class (see Fig. 3 & 4). Finally, as Fig. 3 and 4 show, the patterns approximate their corresponding PNCs, with striking accuracy in the simpler models and training plans. The experiment with the tiling and overlapping datasets backs this finding (see Fig. 5 & 6).
4 DISCUSSION
When children play with dough, they try to embody the concepts that they are learning day by day. By observing their lively artifacts and the level of worked detail, one can infer the content and quality of their learning. This study revolved around the idea that the same applies to a network if it tries to remold a controlled input into the concepts it learned from a dataset. That is, considering a gradient-based adversarial attack as a set of instructions asking a network to generate something that it perceives as an instance of a target class, a zero array or a random noise input functions as a lump of playing dough.
Previous studies (Erhan et al., 2009; Simonyan et al., 2013; Nguyen et al., 2015) have exercised powerful methods, like gradient ascent and evolutionary algorithms, to gain visual insight into the content that a trained network learns from a dataset. However, compared to the adversarial attack practiced in this paper, these methods are either intrusive or unnecessarily coach the networks with extra information about the target class, while the only information the gradient-based attack provides the networks is the nominal value of the target class. Therefore, any changes made to the input are genuinely devised by the network. For that matter, the AP patterns could be the closest approximation to a network's perception, and this study's results confirm this.
4.1 NEURAL NETWORKS DO LEARN
The robustness of the patterns to the abstraction parameters tells us that whatever these patterns are, they are neither trivial byproducts of the methods nor random noise varying from one ADE to another. Rather, they must be a cognition, so to speak, that emerges from the network-dataset dyad.
Going one step further, the patterns' robustness to the network architectures and training configurations, along with the uniqueness of the AP patterns to each class, further validates the assumption that we can reliably take these patterns as the content learned by a trained network. On top of that, this finding explains adversarial transferability by tying APs to the content of the datasets at some level of semantics.
4.2 BUT THEY SEE DIFFERENTLY
Furthermore, inspecting the AP patterns revealed a remarkable resemblance between them and the PNC patterns, which makes perfect sense given the categorical nature of learning in networks. This has two main implications. First, in sharp contrast with the Potemkin analogy (Goodfellow et al., 2014), ADEs contain patterns which are evidence of the networks' capacity to learn, and to learn high-level semantics; adversarial vulnerability is only a side effect of categorical learning. That is, when a network learns a concept, instead of exclusively focusing on the features of the concept itself, it equally relies on the characteristics of all other concepts in the dataset that are not the target concept. Therefore, when the negative set is too sparse (for example, nine digit shapes in the MNIST case) compared to the enormously spacious space that we define for the network (a 784-dimensional space in the MNIST case), the positive and negative sets fail to converge on a matching concept. That, in turn, causes the discrepancy of perception between the networks and humans; meanwhile, we constantly forget that we have the privilege of learning in the context of a world flourishing with notions and concepts. In fact, Szegedy et al. (2013), in the very paper that introduced ADEs, rightly although only partially addressed this issue by mentioning the high dimensionality of data as a possible reason behind adversarial vulnerability.
In addition, my experiment with the dummy datasets supports this rationale. The two datasets, which have their miniature space occupied by comparatively gigantic objects, are almost identical in their degree of simplicity. However, while we can generate ADEs for the overlapping dataset, the same attack on the tiling dataset does nothing but replicate near-perfect examples of the target classes.
The second implication of this finding is that, on one hand, we can estimate PNCs with the AP patterns; on the other hand, it is computationally possible to break PNCs down into the average arrays of all classes of the corresponding dataset. That creates a pipeline from zero- or noise-feed queries against a black-box model to the average vectors of its training dataset. Even though the methods proposed in this paper are not capable of delivering elaborate PNCs, the doors of opportunity (or misapplication, to be more accurate) are wide open.
4.3 LIMITATIONS
While the results of this study stand on their own, it is no trivial matter that I only worked with small-sized FC and CNN models trained on two simple grayscale datasets.
Both the MNIST and F-MNIST datasets contain samples with center-aligned shapes against a uniform background, which makes it easy to calculate average images and manifest PNCs. Whether more complicated datasets with color images, in which, for example, instances of a dog can appear anywhere in the input area, can yield some sort of PNC is a question that arguably requires a different strategy than the simple method I used. Moreover, with the increased complexity of the network architectures and their close-to-perfect accuracy (especially on simple datasets), it becomes computationally costly and more time-consuming to generate ADEs or obtain lucid AP patterns with the particular method exercised in this study.
However, my preliminary efforts, which failed to derive supportive results with the ResNet-50 (He et al., 2016) and MobileNetV2 (Sandler et al., 2018) models trained on the ImageNet dataset (Deng et al., 2009), bruise my confidence in the existence of a shared learning strategy among all models and datasets. | 1. What is the main contribution of the paper on neural networks and deep cognition?
2. What are the strengths and weaknesses of the author's approach to investigating adversarial perturbations empirically?
3. How does the reviewer assess the significance and novelty of the found patterns in the context of the high dimensionality of the data and the neural network parameters?
4. What are the limitations of the investigation regarding the choice of optimization schemes, initialization methods, and loss functions?
5. How do the found patterns compare to usual adversarial examples, and what implications does this have for understanding the perception of neural networks?
6. Can the presented method recover semantic content for more complex datasets like CIFAR10?
7. What are some specific concerns or suggestions for improving the experiment design and analysis? | Summary Of The Paper
Review | Summary Of The Paper
The submission "Neural Networks Playing Dough: Investigating Deep Cognition With a Gradient-Based Adversarial Attack" investigates adversarial perturbations empirically. For small FCNs and CNNs on MNIST/FMNIST, the author finds data points with minimal target loss for each class by gradient descent from different initializations. These data points can contain semantic meaning, and it is suggested that these patterns closely approximate the perception of these neural networks.
Review
Overall this is a diligent submission, and I commend the author for thinking outside the box; however, I do think that the presented evidence, while interesting, is not substantial enough to warrant the conclusions drawn from it.
My main concern is that this work, while it highlights the high dimensionality of the data, underestimates the high dimensionality of the neural network in terms of its parameters. Paraphrasing this submission from a very broad vantage point, to me, it appears to show that first-order optimization can (sometimes) find data points with minimal loss that are also semantically meaningful (i.e. they contain low-frequency structures and not just noise). Different initializations can further (sometimes) find semantically similar data points. This is interesting but, due to the high dimensionality of the investigated neural networks, not too surprising. These data points are only a small subset of all possible data points with low loss; that neural networks assign low loss to these data points is not a strong argument toward their perception, as many other (adversarial) examples exist that are also "perceived" similarly, meaning they also have a low loss. The author investigates only gradient descent (with a fixed step size that appears to be 1.0 from the code(?)), but other algorithms could find other data points with minimal loss and other characteristics - for example using signed gradient descent, a variety of step sizes, loss function heuristics, and other initialization schemes. On the other hand, these data points could be made more semantically meaningful by initializing them with examples from the training set; the neural network is already trained to assign low loss to these examples, no further optimization is necessary, yet they can also be characterized as solutions of the given optimization problem. In light of these possible variations, the actual patterns seem arbitrary and an artefact of the optimization scheme used to find them.
I further do not think that the investigated data points are good examples of "adversarial perturbations" (and I have eschewed calling them such above). The investigated data points with minimal loss are found by unconstrained optimization in the image space. This makes them qualitatively different from "usual" adversarial examples! Usual adversarial examples are constrained in some metric in image space - for example, in the most common case, a small ℓ∞ bound around some data point. From the size of this bound it is clear that there exists no semantically meaningful example within this bound that should be classified as the adversarial label. As such, the perturbation really is adversarial and outside of context that is meaningful to humans. However, the investigated data points are unconstrained and, as discussed, a multitude of meaningful examples are also possible solutions, such as the entire training set.
The submission already finds that "as the complexity of the architectures and optimizers grow, the clarity of the patterns deteriorate", and I would argue that this is a result of the fact that a variety of possible patterns already exist in any case and as the dimensionality of the model increases, the chosen optimization schemes will result in disparate patterns even quicker than before.
Minor Comments:
"APs are invisible to us" - but this does not apply to the datapoints found in this investigation. Rather it is possible to find data points which have bother low loss and are "invisible" in some metric.
All patterns in this work are further found for MNIST/FMNIST which are centered and sufficiently simple datasets without background. Would the presented method recover semantic content for CIFAR10?
Figures 3 & 4 to me really show that the observed pattern is too specific to the proposed process to find it for it to be in some sense universal to a given neural network.
The experiment with tiling does not really disprove this. The tiling task is easy enough that a very simple (and even linear?) classifier can be learned by the neural network to match the observed behavior. That this classifier is then more robust, and thus leads to more meaningful perturbations, is a statement about the inherent robustness of the decision boundary of such a model and, to me, not about qualities of these data points.
Some concrete examples of non-semantic data points that lead to arbitrary classifications are universal adversarial perturbations (as e.g. in https://arxiv.org/pdf/1610.08401.pdf and follow-up work); these perturbations are much smaller than the unbounded examples investigated here and essentially devoid of semantic content.
The "distance" in 2.1 between prediction and target is CrossEntropy (from looking at the code). This would be good to mention. |
ICLR | Title
Neural Networks Playing Dough: Investigating Deep Cognition With a Gradient-Based Adversarial Attack
Abstract
Discovering adversarial examples has shaken our trust in the reliability of deep learning. Even though brilliant works have been devoted to understanding and fixing this vulnerability, fundamental questions (e.g. adversarial transferability) remain unanswered. This paper tests the hypothesis that it is not the neural networks failing in learning that causes adversarial vulnerability, but their different perception of the presented data. And therefore, adversarial examples should be semantic-sensitive signals which can provide us with an exceptional opening to understanding the networks’ learning. To investigate this hypothesis, I performed a gradient-based attack on fully connected feed-forward and convolutional neural networks, instructing them to minimally evolve controlled inputs into adversarial examples for all the classes of the MNIST and Fashion-MNIST datasets. Then I abstracted adversarial perturbations from these examples. The perturbations unveiled vivid and recurring visual structures, unique to each class and persistent over parameters of abstraction methods, model architectures, and training configurations. Furthermore, these patterns proved to be explainable and derivable from the corresponding dataset. This finding explains the generalizability of adversarial examples by, semantically, tying them to the datasets. In conclusion, this experiment not only resists interpretation of adversarial examples as deep learning failure but on the contrary, demystifies them in the form of supporting evidence for the authentic learning capacity of networks.
1 INTRODUCTION
Szegedy et al. (2013) introduced the term “ADversarial Examples,” (ADEs) and with that they unveiled a terrifyingly effortless technique for fooling highly accurate neural networks into ludicrous misclassifications of obvious images. However, it took the field four years and a small sticker patch featuring some alien curves with metallic shine (Brown et al., 2017) before realizing how dire the situation could get, in a time when we unlock our phones (Bryliuk & Starovoitov, 2002) and law enforcement identifies suspected criminals (Garvie, 2016) with the very same technology.
ADEs, typically, are copies of the images that a network can classify correctly with a high confidence, plus Adversarial Perturbations (APs) indiscernible to human eyes which misleads the network to classify them incorrectly with an even higher confidence. Inspirational works have contributed to understanding the nature of ADEs. The early speculations were about the high dimensionality of data (Szegedy et al., 2013) and then the linearity of learning in networks (Goodfellow et al., 2014). But these explanations fall short of addressing the phenomenon known as adversarial transferability. This refers to the fact that elusiveness of ADEs created using one subset of a dataset with one network can also fool other models trained on different subsets. Later on, Jo & Bengio (2017) and Ilyas et al. (2019) proposed different accounts that consider, respectively, statistical regularities and non-robust features of datasets as the nature of ADEs. These suggestions offer grounds to understand the transferability. However, in the lack of an agreed-upon explanation, there is still spacious room to explore.
The present study shifts the focus from the ADEs to the APs, and is based on the hypothesis that adversarial vulnerability is not a network weakness, but a consequence of the different ways of per-
ceiving datasets by humans and by the networks. In that respect, the APs represent the networks’ genuine perception of datasets. Therefore, studying APs could be a rare opportunity for understanding the learning process in networks, and it may also answer our lingering questions about the ADEs. To this end, I used a gradient-based attack to produce ADEs for the MNIST (LeCun et al., 2010) and Fashion-MNIST (Xiao et al., 2017) (F-MNIST) datases. Then I abstracted the APs, visualized them, and investigated if there is a way to explain them with the datasets; not with some statistical or non-robust features, but entirely through high level semantics of datasets.
2 METHODS
2.1 GENERATING ADVERSARIAL EXAMPLES
For generating ADEs, I performed a simple gradient-based adversarial attack, introduced by Szegedy et al. (2013). In this method, a trained network runs gradient descent optimization on the input instead of the weights, and treats the weights as non-trainable parameters. The purpose is to alter the input image to what the network would classify with a (commonly nonsensical) target label. To keep the change at a minimum level and limited to the most decisive ones, I defined the loss function as the distance between the network prediction and the target label, omitting any further constraint on the final visual appearance of the input. For having the freedom to explore the models behavior, rather than the traditional “target classification,” I set a disjunctive stop condition for terminating the network optimization loop. The stop condition consists of: a maximum loss threshold, that is the loss function value calculated between the optimized input prediction and the target label; a minimum level of confidence, for the network prediction of the optimized input; and finally a maximum number of iterations counting the rounds of optimization of the input, initially designed to avoid the infinite loop.
2.2 ABSTRACTING THE ADVERSARIAL PERTURBATIONS
Regarding the network acuity with raw data compared to the weak visual perception in humans, APs are invisible to us. Therefore to embolden them, I took one direct and two indirect approaches. In the direct approach (zero feed), I fed the networks with a zero array. When the input is zero, the modified array would be purely AP. Intended on verifying the reproducibility of the patterns obtained with the direct method, I also generated ADEs with normal noise as input. The drawback is that with the noise input, the ADEs will be noisy as well, and the signal-to-noise ratio is so poor that no patterns would be visible. Therefore, in one indirect approach (noise feed), I simply subtracted the initial noise input from the final ADE to harvest the AP. And in the next indirect method (Adversarial Average Image (AAI)), I aimed to accentuate the APs by computing cumulative adversarial Images. In doing so, I generated ADEs in batches of size, generally, 30 and added them up into one AAI, hoping for the noise to cancel itself out while the AP is getting augmented.
To compare the APs produced with these three methods, I trained a Fully Connected feed-forward (FC) network [784, 200, 30, 10] with a Stochastic Gradient Descent (SGD) optimizer on the MNIST dataset, up to the validation accuracy of 0.92 and loss value of 0.27. After some trial and error, I
found that the maximum iterations is the most efficient stop condition since the networks could engineer ADEs for some classes dramatically faster than the others and that delivers the APs still visually incomprehensible; that is especially the case with the AAI method. As a result, for generating ADEs in all three methods, the only stop condition I applied was the maximum iterations, set to 1,000. I then computed APs for all 10 classes, but did not normalize them. The reason being that the results are presented visually, using the pyplot.imshow method of the Matplotlib library (Hunter, 2007) which by default normalizes images to their minimum and maximum values, and for the purpose of this study, it is contrast that matters and not the true values of the shades.
2.3 INVESTIGATING THE PATTERNS’ ROBUSTNESS TO THE ABSTRACTION PARAMETERS
To investigate whether the AP patterns are independent of the abstraction methods, I extracted APs from several ADEs, with a range of different parameters for each approach. For generating ADEs, I used the aforementioned FC architecture and trained it with the same optimizer on the MNIST dataset; this time up to a validation accuracy of 0.94 and the loss value of 0.19. Then I computed APs for the class 5, with the zero feed method with the stop condition set at three levels of confidence: 80, 90, and >99%; with the noise feed method with the stop condition set at three values of loss: 1, 1e-2, and 1e-4; and with the AAI method three times, as well, with averaging over 10, 100, and 1,000 ADEs.
2.4 INVESTIGATING THE PATTERNS’ ROBUSTNESS TO THE MODEL AND TRAINING PARAMETERS
Furthermore, to verify the persistence of the AP patterns over the network architecture and the training configuration, in addition to the mentioned FC, I also tested a Convolutional Neural Network (CNN) [convolutional layers: 128, 256 followed by dense layers: 512, 256, 10]. I trained the two models with six configurations, combined, this time on both MNIST and also F-MNIST datasets (See Table 1). It is to be noted that I did not try to train the networks to their fullest learning capacity. In fact, as the complexity of the architecture and training grew, I reduced the number of training
FC SGD 0.95 0.19 0.85 0.42
epochs due to the fact that generating ADEs with highly accurate and complicated architectures became adversely expensive from a computational point of view. After training the networks, I generated APs only with the zero and noise feed methods, given that the AAI approach is considerably more time-consuming. For presenting the results, I hand-picked one of the two methods based on the clarity of the patterns and trying to include more diversity.
2.5 ABSTRACTING THE ADVERSARIAL PATTERNS DIRECTLY FROM THE DATASETS
The sensible topology of the AP patterns, paired with the fact that in the classification tasks, deep learning forms in categories, motivated me to investigate the hypothesis that these patterns represent a categorical perception of the classes in the dataset, in which a concept is introduced not only by the instances of what it is, but also equally importantly, through the instances of what it is not. To test this hypothesis, for each class of the MNIST and F-MNIST datasets, I theoretically divided the corresponding database into a positive set (all samples belonging to that class) and a negative set (all samples belonging to the other classes). Next, I computed the average images for both sets and normalized them between 0 and 1. And finally, I calculated the Positive-Negative Contrast image (PNC) by subtracting the negative average image from the positive one.
2.6 CONFIRMING THE RESULTS WITH TWO DUMMY DATASETS
To inspect the categorical learning hypothesis further, I created two dummy datasets each with four classes, each class with one sample; utterly simplistic to secure a perfect learning by networks. The first one is the tiling dataset with four classes of a square patch at each corner of the image, that together they tile the image space. For the tiling dataset, all PNCs precisely match their corresponding class (see Fig. 5). And the second one, the overlapping dataset, with the same four classes of the square patches, but with shifted positions in a way that, if put together, they overlap. Therefore, the PNCs differ from their corresponding class (see Fig. 6). I computed the PNCs and APs for these two datasets using an FC [36, 200, 50, 4] trained with SGD up to the validation accuracy of 100% and loss value of 5.8e-4.
2.7 COMPUTATIONAL RESOURCE
I ran all the computations for this study on an NVIDIA GPU (GeForce® GTX 1070). On this system, generating an AAI with 30 ADEs, with a stop condition of maximum iteration set to 1,000 took 5 minutes on average. For an AP with either zero or noise method, with the same stop condition, the duration plummeted to only 5 seconds.
3 RESULTS
The three methods for obtaining APs produced almost identical patterns, all of which the network classified with the target labels and a remarkably high confidence (see Fig. 1).
These patterns, as soon as they emerge, are indifferent to the parameters of the method used for generating them (see Fig. 2). Furthermore, the patterns persist over a range of network architectures
and training configurations. However, as the complexity of the architectures and optimizers grow, the clarity of the patterns deteriorate. Even so, the less intelligible APs highlight some features of the same patterns, which would be easier to place in the context of the other perturbations for the same class (see Fig. 3 & 4). And finally, as Fig. 3 and 4 show the patterns approximate their corresponding PNCs, with a striking accuracy in simpler models and training plans. The experiment with the tiling and overlapping datasets back this finding (see Fig. 5 & 6).
4 DISCUSSION
When children play with dough, they try to embody the concepts that they are learning day by day. And by observing their lively artifacts and the level of worked details, one can infer about the content and the quality of their learning. This study evolved around the idea that the same will apply to a network, if it tries to remold a controlled input into the concepts it learned from a dataset. That is, considering a gradient-based adversarial attack as a set of instructions asking a network to generate something that it perceives as an instance of a target class, a zero array or a random noise input would function as a lump of playing dough.
Previous studies (Erhan et al., 2009; Simonyan et al., 2013; Nguyen et al., 2015) have exercised powerful methods, like gradient ascent and evolutionary algorithms, for having a visual insight into the content that a trained network learns from a dataset. However, these methods, compared to the adversarial attack practiced in this paper, either are intrusive or unnecessarily coach the networks with some extra information about the target class. While all the information that the gradient-based attack provides the networks with, is the nominal value of the target class. Therefore, any changes made to the input are genuinely devised by the network. For that matter, the AP patterns could be the closest approximation to a network’s perception and this study’s results confirm this.
4.1 NEURAL NETWORKS DO LEARN
The robustness of the patterns to the abstraction parameters tells us whatever these patterns are, they are neither trivial byproducts of the methods, nor random noises varying from one ADE to another. Rather they must be a so-to-speak cognition, emerged from the network-dataset dyad.
One step forward, the patterns’ robustness to the network architectures and training configurations, along with the uniqueness of AP patterns to each class, further validates the assumption that we can reliably take these patterns as the learned content by a trained network. On top of that, this finding justifies the adversarial transferability with explaining APs as responsive to the content of datasets on some level of semantics.
4.2 BUT THEY SEE DIFFERENTLY
Furthermore, inspecting the AP patterns revealed a remarkable resemblance between them and the PNC patterns that makes perfect sense regarding the categorical nature of learning in networks. This has two main implications. First, in sharp contrast with the Potemkin analogy (Goodfellow et al., 2014), ADEs contain patterns which are evidence of the networks’ capacity to learn, and to learn high level semantics. And the adversarial vulnerability is only a side effect of categorical learning. That is, when a network learns about a concept, instead of exclusively focusing on the features of the concept itself, it equally relies on the characteristics of all other existing concepts in the dataset that are not the target concept. Therefore, when the negative set is too sparse (for example, nine digit-shapes in MNIST case) compared to the enormously spacious space that we define for the network (a 784-dimensional space in MNIST case) the positive and negative sets fail to converge on a matching concept. That in turn, causes the discrepancy of perception between the networks and humans; while we constantly forget that we have the privilege to learn in the context of a world flourishing with notions and concepts. In fact, Szegedy et al. (2013), in the very same paper that introduced ADEs for the first time, righteously although partially, addressed this issue by mentioning the high dimensionality of data as a possible reason behind the adversarial vulnerability.
In addition, my experiment with the dummy datasets supports this rationale. The two datasets, whose miniature input space is occupied by comparatively gigantic objects, are almost identical in their degree of simplicity. However, while we can generate ADEs for the overlapping dataset, the same attack on the tiling dataset does nothing but reproduce near-perfect examples of the target classes.
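To make the dummy-dataset comparison concrete, here is a minimal NumPy sketch of one possible construction; the 6×6 canvas matches the 36-dimensional FC input used in these experiments, while the 3×3 tiling patches and 4×4 overlapping patches are illustrative assumptions.

import numpy as np

def make_dummy_datasets(size=6, tile=3, overlap=4):
    # Four one-sample classes: a bright square patch at each corner.
    tiling, overlapping = [], []
    for r in (0, 1):
        for c in (0, 1):
            img = np.zeros((size, size))
            img[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = 1.0
            tiling.append(img)                 # patches tile the space exactly
            img = np.zeros((size, size))
            r0, c0 = r * (size - overlap), c * (size - overlap)
            img[r0:r0 + overlap, c0:c0 + overlap] = 1.0
            overlapping.append(img)            # patches overlap in the middle
    return np.stack(tiling), np.stack(overlapping)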
The second implication of this finding is that, on the one hand, we can estimate PNCs with the AP patterns; on the other hand, it is computationally possible to break PNCs down into the average arrays of all classes of the corresponding dataset. That creates a pipeline from making zero- or noise-feed queries to a black-box model all the way to the average vectors of its training dataset. Even though the proposed methods in this paper are not capable of delivering elaborate PNCs, the doors of opportunity (or misapplication, to be more accurate) are wide open.
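As a minimal NumPy sketch of the dataset end of this pipeline, a class's PNC can be computed as its normalized positive average minus its normalized negative average (the array names are illustrative):

import numpy as np

def positive_negative_contrast(images, labels, cls):
    # images: (N, H, W) float array; labels: (N,) int array.
    def norm01(a):
        return (a - a.min()) / (a.max() - a.min() + 1e-12)
    pos = norm01(images[labels == cls].mean(axis=0))   # samples of the class
    neg = norm01(images[labels != cls].mean(axis=0))   # all other samples
    return pos - neg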
4.3 LIMITATIONS
While the results of this study stand on their own, it is no trivial matter that I only worked with small FC and CNN models trained on two simple grayscale datasets.
Both the MNIST and F-MNIST datasets include samples with center-aligned shapes against a uniform background, which makes it easy to calculate average images and manifest PNCs. Whether more complicated datasets of color images, in which, for example, instances of a dog may appear anywhere in the input area, can yield some sort of PNCs arguably requires a different strategy than the simple method I used. Moreover, with the increased complexity of network architectures and their close-to-perfect accuracy (especially on simple datasets), it becomes computationally costly and more time-consuming to generate ADEs or obtain lucid AP patterns with the particular method exercised in this study.
However, my primary efforts, which failed to derive supportive results with the ResNet-50 (He et al., 2016) and MobileNetV2 (Sandler et al., 2018) models trained on the ImageNet dataset (Deng et al., 2009), bruise my confidence in the existence of a shared learning strategy among all models and datasets.
1. What are the main contributions and novel aspects introduced by the paper regarding adversarial vulnerabilities and deep neural networks?
2. How does the reviewer assess the significance and originality of the proposed approach compared to prior works in the field?
3. What are some references cited by the reviewer to support their argument about the similarity between the proposed method and existing research?
4. What are the weaknesses or limitations of the paper according to the reviewer's perspective?
Summary Of The Paper
The paper hypothesises that adversarial vulnerability is caused not by failures of Deep Neural Networks (DNNs) to learn, but by how they perceive data. Hence, adversarial examples should be thought of as a signal that opens up black-box DNNs. The author also surmises that adversarial examples can be abstracted to a coarser level, and that these abstractions can serve as a summary of the dataset.
Review
In general, the work is at a preliminary stage, and many of the ideas discussed in the paper have been explored in the past, albeit in different forms. For instance, the concept of abstracting out adversarial examples at a global level is the core idea of Universal Adversarial Perturbations and the frameworks that build on it. More broadly, viewing adversarial perturbations as a form of semantic signal that can help open up black-box DNNs [1, 2, 3] and understanding their generalization properties [4] have been explored extensively in the literature.
1. Interpretable Explanations of Black Boxes by Meaningful Perturbation.
2. An Empirical Study on the Relation between Network Interpretability and Adversarial Robustness.
3. Regularized Adversarial Examples for Model Interpretability.
4. Disentangling Adversarial Robustness and Generalization.
ICLR | Title
Neural Networks Playing Dough: Investigating Deep Cognition With a Gradient-Based Adversarial Attack
1. What is the focus of the paper regarding adversarial attacks on neural networks?
2. What are the strengths of the proposed approach, particularly in its application to MNIST and F-MNIST datasets?
3. What are the weaknesses of the paper, especially regarding its understanding of adversarial attacks and lack of originality in its methodology?
4. How does the reviewer assess the clarity and conciseness of the paper's writing style?
5. Are there any suggestions for improving the paper's content or research methodology?
Summary Of The Paper
The paper investigates the problem of the susceptibility of neural networks to adversarial attacks. It considers the hypothesis that "it is not the neural networks failing in learning that causes adversarial vulnerability, but their different perception of the presented data" (quoted verbatim). In other words (as far as I understand), the networks are like a computer which doesn't do what we want it to do, but only what we tell it to do. To test this, the author shows that what the manuscript considers adversarial perturbations is not random noise, but has some meaning and can be correlated with the training dataset. This is done using the MNIST and F-MNIST datasets, and simple fully connected and convolutional networks. However, the attempt to reproduce the results on ImageNet using ResNet-50 and MobileNetV2 models failed.
Review
Strengths:
The paper shows that simple ANN models trained on MNIST and F-MNIST simply memorise the average element for each class. However, this is not the case for larger models trained on larger datasets. Some interesting conclusions can be drawn from this about the usefulness of MNIST and F-MNIST (and similar small, simple datasets) for Deep Learning research.
The author posed an interesting hypothesis (if I understood it correctly) and I would encourage them to investigate it further.
Weaknesses:
The paper attempts to construct adversarial examples by applying a gradient descent attack to either a blank image or random noise. However, adversarial examples are constructed by taking a correctly classified test example as a starting point and adding a small perturbation causing the model to err. That is their accepted definition in the literature. The author should not call the presented image manipulation an "adversarial attack". It would have been different if the patterns obtained with the method described in the paper were applied to correctly classified test examples and managed to fool the model, but the paper does not include such an experiment. I suggest performing it, since it's relatively cheap and could produce interesting results.
In any case, the method described in the paper cannot explain ALL forms of adversarial attacks, since image classifiers are also susceptible to small translations or rescalings of inputs (Azulay and Weiss, 2018; https://arxiv.org/abs/1805.12177).
If we abandon the adversarial example point of view, the work presented in the paper can be understood as analysing what the neural network has learnt for each class. However, the technique employed is very similar to a saliency map, a known technique for analysing exactly this problem (see e.g. Simonyan, Vedaldi and Zisserman, 2013; https://arxiv.org/abs/1312.6034). The contribution here is not original enough to be published as a new method. It is absolutely fine to use a known method, but it should be acknowledged in the paper.
The prose of the manuscript is in my opinion too flowery, verbose and imprecise for a research paper. I understand style is personal, but there are some conventions in writing scientific literature which one should not deviate from too much or without a good reason. The research hypothesis should be stated more precisely. Phrases like "unveiled a terrifyingly effortless technique for fooling highly accurate neural networks into ludicrous misclassifications of obvious image" should be avoided or used very sparingly. Terms like "deep cognition" should be avoided entirely, unless one is writing a paper about cognition.
In Sec. 2.1, the loss function is defined as "the distance between the network prediction and the target label". But labels are discrete, so how could one run gradient descent on this loss function? It would help if the paper provided a formula for the loss function.
ICLR | Title
Neural Networks Playing Dough: Investigating Deep Cognition With a Gradient-Based Adversarial Attack
1. What are the main contributions and key observations made in the paper regarding adversarial patterns?
2. Are there any limitations or concerns regarding the simplicity of the datasets used in the study?
3. How might the findings of the paper be leveraged to enhance model robustness against adversarial attacks?
4. What changes could be made to improve the organization and structure of the paper, particularly regarding implementation details?
5. Were there any parts of the paper where the authors did not provide sufficient conclusions or discussions? If so, which sections?
Summary Of The Paper
In this paper, the authors first visualize adversarial patterns generated with three different methods: inputs with zero entries, noise inputs with the initial noise subtracted, and an averaged version over batches of noise inputs. The authors then check whether the patterns are attack-method-agnostic and model-agnostic. Finally, the authors conclude that the patterns approximate their corresponding positive-negative contrast images.
Review
The authors forgot to provide conclusions in many sections. For example, what is the conclusion of Sec. 2.3?
The datasets are too simple. I would expect datasets like ImageNet, or maybe a simpler one such as CIFAR-10. (The authors mentioned that they tested on ImageNet but failed. Does that mean the observations and conclusions are limited to datasets with clear and robust features?)
The work in its current version looks more like a report than a conference paper. I would suggest the authors improve the paper's organization by moving most of the implementation details to the appendix or a single section.
The authors have found consistent, trackable patterns in adversarial examples. One natural question is whether we can leverage these findings to improve model robustness.
ICLR | Title
Accelerating DNN Training through Selective Localized Learning
Abstract
Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. We propose LoCal+SGD, a new algorithmic approach to accelerate DNN training by selectively combining localized or Hebbian learning within a Stochastic Gradient Descent (SGD) based training framework. Back-propagation is a computationally expensive process that requires 2 Generalized Matrix Multiply (GEMM) operations to compute the error and weight gradients for each layer. We alleviate this by selectively updating some layers' weights using localized learning rules that require only 1 GEMM operation per layer. Further, since the weight update is performed during the forward pass itself, the layer activations for the mini-batch do not need to be stored until the backward pass, resulting in a reduced memory footprint. Localized updates can substantially boost training speed, but need to be used selectively and judiciously in order to preserve accuracy and convergence. We address this challenge through the design of a Learning Mode Selection Algorithm, where all layers start with SGD, and as epochs progress, layers gradually transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while the transition layer and later layers use gradient-based updates. The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs. We also propose a low-cost weak supervision mechanism by controlling the learning rate of localized updates based on the overall training loss. We applied LoCal+SGD to 8 image recognition CNNs (including ResNet50 and MobileNetV2) across 3 datasets (Cifar10, Cifar100 and ImageNet). Our measurements on an Nvidia GTX 1080Ti GPU demonstrate up to 1.5× improvement in end-to-end training time with ∼0.5% loss in Top-1 classification accuracy.
1 INTRODUCTION
Deep Neural Networks (DNNs) have achieved continued success in many application domains involving images (Krizhevsky et al., 2017), videos (Ng et al., 2015), text (Zhou et al., 2015) and natural language (Goldberg & Hirst, 2017). However, training state-of-the-art DNN models is computationally quite challenging, often requiring exa-FLOPs of compute, as the models are quite complex and need to be trained using large datasets. Despite rapid improvements in the capabilities of GPUs and the advent of specialized accelerators, training large models using current platforms is still quite expensive and often takes days or even weeks. In this work, we aim to reduce the computational complexity of DNN training through a new algorithmic approach called LoCal+SGD¹, which alleviates the key performance bottlenecks in Stochastic Gradient Descent (SGD) through selective use of localized or Hebbian learning.
Computational Bottlenecks in DNN Training. DNNs are trained in a supervised manner using gradient-descent based cost minimization techniques such as SGD (Bottou, 2010) or Adam (Kingma & Ba, 2015). The training inputs (typically grouped into minibatches) are iteratively forward propagated (FP) and back propagated (BP) through the DNN layers to compute weight updates that push the network parameters in the direction that decreases the overall classification loss.
¹ In addition to combining localized and SGD-based learning, LoCal+SGD is Low-Calorie SGD, i.e., SGD with reduced computational requirements.
Back-propagation is computationally expensive, accounting for 65-75% of the total training time on GPUs. This is attributed to two key factors: (i) BP involves 2 Generalized Matrix Multiply (GEMM) operations, one to propagate the error across layers and the other to compute the weight gradients, and (ii) when training on distributed systems using data/model parallelism (Dean et al., 2012b; Krizhevsky et al., 2012), aggregation of weight gradients/errors across devices incurs significant communication overhead. Further, BP through auxiliary ops such as batch normalization is also more expensive than FP.
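To make the GEMM counts concrete, consider a single linear layer; the following NumPy sketch (with illustrative shapes) shows the one forward GEMM versus the two backward GEMMs noted above.

import numpy as np

# Forward pass of one linear layer: a single GEMM.
X = np.random.randn(64, 512)     # mini-batch of input activations
W = np.random.randn(512, 256)    # layer weights
Y = X @ W                        # GEMM 1: forward output

# Backward pass of the same layer: two GEMMs.
dY = np.random.randn(64, 256)    # error arriving from the next layer
dX = dY @ W.T                    # GEMM 2: propagate the error backward
dW = X.T @ dY                    # GEMM 3: compute the weight gradient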
Prior Efforts on Efficient DNN Training. Prior research efforts to improve DNN training time can be grouped into a few directions. One group of efforts enables larger scales of parallelism in DNN training through learning rate tuning (You et al., 2017a; Goyal et al., 2017; You et al., 2017b) and asynchronous weight updates (Dean et al., 2012a). Another class of efforts employs importance-based sample selection during training, wherein ‘easier’ training samples are selectively discarded to improve runtime (Jiang et al., 2019; Zhang et al., 2019). Finally, model quantization (Sun et al., 2019) and pruning (Lym et al., 2019) can lead to significant runtime benefits during training by enabling the use of reduced-bitwidth processing elements.
LoCal+SGD: Combining SGD with Localized Learning. Complementary to the aforementioned efforts, we propose a new approach, LoCal+SGD, to alleviate the performance bottlenecks in DNN training while preserving model accuracy. Our hybrid approach combines Hebbian or localized learning (Hebb) with SGD by selectively applying it in specific layers and epochs. Localized learning rules (Hebb; Oja, 1982; Zhong, 2005) utilize a single feed-forward weight update to learn the feature representations, eschewing BP. Careful formulation of the localized learning rule can result in ∼2× computation savings compared to SGD and also significantly reduces the memory footprint, since activations from FP need not be retained until BP. The reduction in memory footprint can in turn allow increasing the batch size during training, which leads to further runtime savings due to better compute utilization and reduced communication costs. It is worth noting that localized learning has been actively explored in the context of unsupervised learning (Chen et al., 2020; van den Oord et al., 2018; Hénaff et al., 2019). Further, there have been active research efforts on neuro-scientific learning rules (Lee et al., 2015; Nøkland, 2016). Our work is orthogonal to such efforts and represents a new application of localized learning in a fully supervised context, wherein we selectively combine it within an SGD framework to achieve computational savings.
Preserving model accuracy and convergence with LoCal+SGD requires localized updates to be applied judiciously, i.e., only to selected layers in certain epochs. We address this challenge through the design of a learning mode selection algorithm. At the start of training, the selection algorithm initializes the learning mode of all layers to SGD, and as training progresses it determines the layers that transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while subsequent layers use gradient-based updates. This allows BP to stop at the transition layer, as layers before it have no use for the back-propagated errors. The algorithm takes advantage of the magnitude of the weight updates of the Localized→SGD transition layer in deciding the new position of the boundary every epoch. Further, we provide weak supervision by tweaking the learning rate of locally updated layers based on the overall training loss.
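As a rough illustration, the following Python sketch advances the boundary when the transition layer's weight-update magnitudes flatten out across epochs; the specific flattening threshold is an assumption for illustration, not the exact criterion used by the algorithm.

def update_boundary(boundary, update_norms, threshold=0.01):
    # update_norms: per-epoch mean |weight update| of the current transition
    # layer; boundary: index of the transition layer (layers before it are
    # trained with localized updates).
    if len(update_norms) < 2:
        return boundary
    prev, curr = update_norms[-2], update_norms[-1]
    if abs(curr - prev) / (abs(prev) + 1e-12) < threshold:
        boundary += 1   # the transition layer switches to localized learning
    return boundary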
Contributions: To the best of our knowledge, LoCal+SGD is the first effort that combines localized learning (an unsupervised learning technique) within a supervised SGD context to reduce computational costs while maintaining classification accuracy. This favorable tradeoff is achieved by LoCal+SGD through a Learning Mode Selection Algorithm that applies localized learning to selected layers and epochs. Further improvement is achieved through the use of weak supervision by modulating the learning rate of locally updated layers based on the overall training loss. Across 8 image recognition CNNs (including ResNet50 and MobileNet) and 3 datasets (Cifar10, Cifar100 and ImageNet), we demonstrate that LoCal+SGD achieves up to 1.5× improvement in training time with ∼0.5% Top-1 accuracy loss on a Nvidia GTX 1080Ti GPU.
2 LoCal+SGD: COMBINING SGD WITH SELECTIVE LOCALIZED LEARNING

The key idea in LoCal+SGD is to apply localized learning to selected layers and epochs during DNN training to improve the overall execution time, without incurring loss in accuracy. The following components are critical to the effectiveness of LoCal+SGD:
• Localized Learning Rule Formulation. We formulate a computationally efficient localized learning rule and highlight the clear runtime benefits when compared to SGD.
• Learning Mode Selection Algorithm. We propose a learning mode selection algorithm that chooses between localized learning and SGD-based learning for each layer in every epoch, based on the potential impact on accuracy and computational benefits.
• Weak Supervision. We propose a weak supervision technique, which comprises a low-cost supervision signal communicated to the localized learning layers in each epoch. The signal modulates the learning rates of these layers based on the rate of change of the overall classification loss.
In the following sub-sections, we describe the salient aspects of these components in greater detail.
2.1 EFFICIENT LOCALIZED LEARNING
Localized learning has been extensively explored in the context of unsupervised learning, demonstrating success on small (<= 3 layer) networks using relatively simpler datasets (e.g., MNIST, Cifar-10) (LeCun & Cortes, 2010; Krizhevsky et al., a), with an accuracy gap that is yet to be bridged on larger networks and datasets (e.g., ResNet50 or MobileNetV2 on ImageNet (Deng et al., 2009)). First proposed in (Hebb), the key intuition behind localized learning rules is to encourage correlations between neurons that have similar activation patterns. Equation 1 depicts the Hebbian weight update proposed in (Hebb), for a synapse with weight W, connecting a pair of input and output neurons whose activation values are represented by x and y respectively, with η as the learning rate.
∆W = η · x · y (1)

Considerable research has gone into evolving this equation over the years to improve the performance of localized learning (Oja, 1982; Zhong, 2005). However, many of the proposed rules are computationally complex, or are difficult to parallelize on modern hardware platforms such as GPUs and TPUs. Since our primary goal is improving DNN training time, we adopt the computationally simple localized learning rule presented in Equation 1.
The learning rule in Equation 1 assumes a distinct synapse between each input and output neuron pair. While its application to fully-connected (fc) layers is straightforward, we need to consider the sharing of weights between neuron pairs in convolutional (conv) layers. For updating a shared weight of a conv layer, we calculate the individual updates due to each pair of pre- and post-synaptic neurons sharing the weight, and sum all such updates. This essentially reduces to a convolution operation between the input and output activations of the layer, and can be expressed by Equation 3 in Figure 1. For further computational efficiency, unlike Equation 1, we consider the pre-activation values of the outputs, i.e., zl instead of their post-activation values al. Further, we normalize the localized update values as shown in Equation 4 of Figure 1, as this was observed to achieve better convergence in practice.
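To make Equations 3 and 4 concrete, the sketch below shows one way the localized conv update could be written in PyTorch. This is our illustrative code, not the authors' implementation: the function name local_conv_update is hypothetical, and we use torch.nn.grad.conv2d_weight simply because it computes exactly the convolution-between-input-and-output-activations structure of Equation 3.

```python
import torch

def local_conv_update(weight, a_prev, z, lr, stride=1, padding=1):
    """Hebbian-style localized update for a conv layer (Eqs. 3-4).

    weight: conv kernel, shape (C_out, C_in, kH, kW)
    a_prev: input activations a_{l-1}, shape (N, C_in, H, W)
    z:      pre-activation outputs z_l, shape (N, C_out, H', W')
    """
    with torch.no_grad():
        # Eq. 3: the summed per-synapse Hebbian updates reduce to a
        # convolution between input and output activations -- structurally
        # the same GEMM as the SGD weight gradient, with z_l in place of
        # the back-propagated error delta_l.
        delta_w = torch.nn.grad.conv2d_weight(
            a_prev, weight.shape, z, stride=stride, padding=padding)
        # Eq. 4: normalize the update (observed to aid convergence).
        delta_w = delta_w / (delta_w.norm() + 1e-12)
        weight.add_(lr * delta_w)
```

Note that the update is computed entirely from forward-pass quantities, which is what enables the memory savings discussed below.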
Overall, we utilize Equations 3 and 4 from Figure 1 to perform the weight updates in all layers that are earlier than the Localized→SGD transition layer during a certain epoch. All other layers continue to be updated using SGD-based BP, expressed by Equations 5-7 in Figure 1. SGD updates are applied to batch-normalization layers present after the Localized→SGD transition layer, and are otherwise skipped. Clearly, Equation 3 has the same computational complexity as Equation 6 of SGD-based BP for conv and fc layers. Thus, from Figure 1, we can directly infer that our localized learning rule will be considerably faster than SGD-based BP. In practice, we measured this improvement to be more than 2× on an NVIDIA GTX 1080Ti GPU for the ImageNet-ResNet50 benchmark, across all conv and fc layers. In addition to the computational complexity, the memory footprint of SGD-based
BP is also higher. This is because DNN software frameworks commonly store all activation values computed during FP to avoid recomputing al−1, the input activations to the layers, used in Equation 6 of SGD-based BP. In contrast, the localized update for a layer can be performed as soon as the FP through the layer is complete. The activation tensor al of layer l can be discarded or over-written as soon as FP proceeds to the next layer in the network, thereby freeing up a significant portion of on-device memory during training. In turn, this can allow larger mini-batch sizes to be accommodated on a given hardware platform, when the localized updates are applied to a sufficient number of layers.
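As a hedged illustration of how the hybrid scheme could fit into a forward pass, the sketch below (reusing local_conv_update from the previous sketch; the names layers and transition are our assumptions) applies the localized updates during FP itself and detaches the activations at the boundary, so that autograd neither stores the prefix activations nor propagates errors into the locally updated layers.

```python
import torch

def hybrid_forward(layers, x, transition, lr_local):
    """One mini-batch: layers[:transition] learn locally during FP itself;
    BP is stopped at the transition layer via detach()."""
    with torch.no_grad():
        for layer in layers[:transition]:
            z = layer(x)                                  # pre-activation z_l
            local_conv_update(layer.weight, x, z, lr_local,
                              stride=layer.stride, padding=layer.padding)
            x = torch.relu(z)    # a_{l-1} no longer needed: memory is freed
    x = x.detach()               # no errors flow into the localized prefix
    for layer in layers[transition:]:
        x = torch.relu(layer(x))  # trained with SGD-based BP as usual
    return x
```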
2.2 LEARNING MODE SELECTION ALGORITHM
The compute benefits of localized learning come at the cost of potential loss in classification accuracy with respect to SGD training. Thus, we utilize a learning mode selection algorithm to judiciously choose when and where to apply localized learning. The proposed algorithm identifies the learning mode of each layer at every epoch to maximize the runtime benefits, while incurring minimal losses in classification accuracy.
To design an efficient learning mode selection algorithm, we first study the effects of different spatiotemporal patterns of localized learning on the computational efficiency and classification accuracy of a network. We specifically investigate whether localized learning is more suitable for specific layers in the network and specific phases in the training process.
Impact on runtime efficiency: We first analyze the spatial trends, i.e., whether locally updating specific layers in the network results in better runtime efficiency. In a particular epoch, if a convolutional layer L updated with SGD precedes a convolutional layer K that is updated locally, calculating the SGD-based error gradients of layer L, i.e., δL, requires error propagation through the locally updated layer K. From a compute efficiency perspective, the benefits of using localized updates in layer K completely vanish. Thus, it makes sense to partition the network into two regions: a prefix (set of initial layers) that is updated using localized learning, followed by layers that are updated with SGD. SGD-based BP is stopped at the junction of the two regions. Naturally, the compute benefits increase when the number of locally updated layers is higher and thus the boundary, i.e., the Localized→SGD transition layer, is moved deeper into the network. The impact of different temporal patterns on runtime efficiency is quite straightforward, with a higher number of locally updated epochs leading to higher benefits. Further, as the compute complexity of localized updates is constant across different epochs, these benefits are agnostic of which particular epoch involves localized learning.
Impact on accuracy: To analyze the impact on accuracy, we first examine the nature of features learnt by different layers trained by SGD. It is commonly accepted that the initial layers of a network (Agrawal et al., 2014) perform feature extraction, while later layers aid in the classification process. As localized learning demonstrates better performance for feature extraction, applying it more aggressively, i.e., for a higher number of epochs, in the initial layers has a much smaller impact on accuracy. However, for later layers in the network, the number of localized learning epochs should be progressively reduced to preserve accuracy.
Overall, based on the impact of localized learning on both runtime and accuracy, we find that a good learning mode selection algorithm should favor application of localized learning to a contiguous group of initial layers, while ensuring fewer or no localized learning epochs in later layers. We further
impose an additional constraint on top of this spatio-temporal pattern. Specifically, we allow each layer to transition from one learning mode to another at most once during the entire training process. We empirically observe that utilizing SGD as the initial learning mode allows the network to achieve a higher accuracy than utilizing localized learning as the initial mode. SGD essentially provides a better initialization point for all layers, and the subsequent use of localized updates enables the training to converge with good accuracy.
Algorithm 1 Learning Mode Selection Algorithm
Input: TE (index of the transition layer at epoch E), ek (epochs since last transition), ||∆WE|| (L2 norm of the weight update of the transition layer at epoch E), K (minimum interval between transitions), tshift (number of layers to shift boundary)
Output: TE+1 (index of the transition layer at epoch E+1)
1: WAvg = (1/K) · Σ_{e=E−K}^{E−1} ||∆We||
2: if ||∆WE|| <= α · WAvg and ek >= K then
3:    TE+1 = TE + tshift
4:    ek = 0
5: else
6:    TE+1 = TE
7:    ek = ek + 1

In accordance with the above considerations, we propose a learning mode selection algorithm, described in Algorithm 1, that identifies the position of the boundary or the Localized→SGD transition layer every epoch. To that end, the algorithm analyzes the L2 norm of the SGD weight updates made to the Localized→SGD transition layer across epochs and determines whether the boundary can be shifted deeper into the network for the next epoch. In order to ensure stability in the training process, the algorithm moves the boundary at most once in every K epochs. It calculates the running average of the norm of the updates, WAvg, over the last K epochs (line 1). The boundary is shifted to the right only if the weight update in epoch E is within a fraction α of WAvg, and K epochs have transpired since the last transition (line 2). The rationale for this criterion is that sustained high magnitudes of weight updates in the transition layer indicate that they are potentially critical to accuracy, in which case the transition layer must continue being updated with SGD. If the criterion is not satisfied, the boundary remains stationary (line 5).
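A minimal Python sketch of Algorithm 1 follows, under the assumption that the per-epoch L2 norms of the transition layer's SGD weight updates are logged in a list; the function and variable names are our own.

```python
def update_transition_layer(T_E, e_k, update_norms, E, K, t_shift, alpha):
    """Algorithm 1: choose the Localized->SGD transition layer for epoch E+1.

    update_norms[e] stores the L2 norm ||dW_e|| of the SGD weight update
    of the transition layer at epoch e.
    """
    w_avg = sum(update_norms[E - K:E]) / K        # line 1: running average
    if update_norms[E] <= alpha * w_avg and e_k >= K:
        return T_E + t_shift, 0                   # lines 3-4: shift boundary
    return T_E, e_k + 1                           # lines 6-7: stay put
```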
The value of α is set by analyzing the trends in the weight update magnitudes across the training process for different networks. The hyper-parameter tshift is set to the size of a recurring block, such as the residual blocks in ResNets and MobileNetV2. The hyper-parameter K is selected in a manner that ensures that localized updates are never applied beyond some fraction of the initial network layers. We denote this fraction as Lmax, which is set to 0.75 in all our experiments. Equation 2 is used to compute K for a network of L layers and a total training period of Emax epochs.
K = Emax / (Lmax · L / tshift) (2)
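As a worked example with hypothetical values (not settings reported for any particular experiment): for a 50-layer network trained for Emax = 90 epochs with Lmax = 0.75 and tshift = 3, Equation 2 gives K = 7.2.

```python
# Hypothetical values for illustration only:
E_max, L, L_max, t_shift = 90, 50, 0.75, 3
K = E_max / (L_max * L / t_shift)   # Eq. 2
print(K)   # 7.2: one shift of 3 layers every ~7 epochs covers at most
           # 90 / 7.2 * 3 = 37.5 = 0.75 * 50 layers over 90 epochs
```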
In Figure 3, we plot the progression of the transition layer across the ResNet-34 and -50 benchmarks trained on the ImageNet dataset using LoCal+SGD. Interestingly, the weight update norm metric automatically modulates the rate at which the boundary progresses, as the boundary traverses the deeper layers at a slower rate.
2.3 WEAK SUPERVISION
To further bridge the accuracy gap between our approach and end-to-end SGD training, we introduce weak supervision in the locally updated layers. Unlike the SGD-updated layers, the locally updated layers in our approach cannot take advantage of the information provided by supervision, i.e., the classification error evaluated at the output. We utilize this supervised information through a low-cost weak supervision scheme that consists of a single signal sent to all layers updated locally in a particular epoch, derived from the classification loss observed over the past few epochs. The weak supervision scheme is described in Algorithm 2.
The key principle behind the weak supervision scheme is to control the learning rates of the locally updated layers based on the rate at which the overall classification loss changes. For example, if the overall classification loss has increased across consecutive epochs, we reverse the direction of the
updates (line 3) in the next epoch. In contrast, the update direction is maintained if the overall loss is decreasing (line 5). We find that this weak supervision provides better accuracy than other learning rate modulation techniques for the locally updated layers, such as Adam or momentum-based updates.
Algorithm 2 Weak Supervision Scheme
Input: Li (overall classification loss at epoch i), lrL (original learning rate of layer L)
Output: WL (updated weights of layer L)
1: ∆WL = conv(al−1, zl)
2: if Li−1 < Li then
3:    WL = WL − lrL · ∆WL/||∆WL||
4: else
5:    WL = WL + lrL · ∆WL/||∆WL||

We would like to highlight that traditional SGD provides fine-grained supervision and involves evaluating the error gradients for every neuron in the network. In contrast, the proposed weak supervision scheme provides coarse-grained supervision by forcing all weights to re-use the same loss information. Overall, our weak supervision scheme is not developed with the intent to compete with SGD updates, but is rather a simple, approximate and low-cost technique that brings the final accuracy of LoCal+SGD on par with end-to-end SGD training.
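A minimal sketch of Algorithm 2, assuming the same conv-based localized update as in the earlier sketch; the function name and signature are illustrative.

```python
import torch

def weak_supervised_update(weight, a_prev, z, loss_prev, loss_curr, lr):
    """Algorithm 2: flip the localized update direction when the overall
    classification loss increased across consecutive epochs."""
    with torch.no_grad():
        # Line 1: localized update, an activation-activation convolution.
        delta_w = torch.nn.grad.conv2d_weight(a_prev, weight.shape, z)
        delta_w = delta_w / (delta_w.norm() + 1e-12)
        # Lines 2-5: reverse the update if the loss went up, else keep it.
        sign = -1.0 if loss_prev < loss_curr else 1.0
        weight.add_(sign * lr * delta_w)
```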
3 EXPERIMENTAL RESULTS
In this section, we present the results of our experiments highlighting the compute benefits achieved by LoCal+SGD. We evaluate the benefits across a suite of 8 image-recognition DNNs across 3 datasets. We consider the ResNet18 (He et al., 2015) and VGG13 (Simonyan & Zisserman, 2015) networks for the Cifar10 (Krizhevsky et al., a) and Cifar100 (Krizhevsky et al., b) datasets; and the ResNet34, ResNet50 (He et al., 2015) and MobileNetV2 (Sandler et al., 2018) networks for the ImageNet dataset (Deng et al., 2009). All experiments are conducted on Nvidia GTX 1080Ti GPUs with the batch size set to 64 per GPU, unless otherwise mentioned. Further experimental methodology details for the baseline and proposed approach are provided in the Appendix.
3.1 SINGLE GPU EXECUTION TIME BENEFITS
ImageNet: Table 1 presents the performance of the baseline (end-to-end SGD training) and the proposed LoCal+SGD algorithm on the ImageNet benchmarks in terms of the Top-1 classification error and runtime observed on a single GPU. For all benchmarks listed here, LoCal+SGD applies localized updates to nearly 50-60% of the layers. As can be seen, LoCal+SGD achieves up to ∼1.4× reduction in runtime compared to the baseline, while incurring <0.5% loss in Top-1 accuracy.
Table 1 also compares the performance of LoCal+SGD against existing research efforts designed to improve training efficiency. We perform this analysis against two efforts, namely (i) Training with stochastic depth (Huang et al., 2016) and (ii) Structured Pruning during Training (Lym et al., 2019). Training with stochastic depth, as the name suggests, stochastically bypasses residual blocks by propagating input activations/error gradients via identity or downsampling transformations, resulting in improved training time. However, the approach is targeted towards extremely deep networks and
as seen in Table 1, it incurs a noticeable accuracy loss on networks such as ResNet34, ResNet50 and MobileNetV2. Compared to training with stochastic depth, our proposal clearly achieves better accuracy as well as training runtime benefits. The key principle behind the pruning-during-training approach is to reduce the size of the weight and activation tensors in a structured manner during training, thereby providing speed-ups on GPU/TPU platforms. However, on complex benchmarks such as ResNet50, such techniques achieve speed-ups at the cost of a significant drop in accuracy (∼1.5%). To further demonstrate the utility of localized updates in our approach, we consider a third technique, wherein layers selected to be updated locally for a given epoch are instead frozen, i.e., the parameters are held fixed during that epoch. While this achieves better runtime savings, it incurs a considerably higher loss (∼1%) in accuracy, further underscoring the benefits of LoCal+SGD.

CIFAR-10 and CIFAR-100: Table 2 presents the accuracy and corresponding compute benefits of the baseline and the proposed technique, as well as training with stochastic depth and layer freezing, for the CIFAR-10 and CIFAR-100 datasets. Stochastic depth is applicable only to residual blocks and is hence not considered for the VGG-13 network. Across benchmarks, we observe up to a 1.51× improvement in training runtime. Compared to the ImageNet benchmarks, LoCal+SGD applies localized updates more aggressively on the CIFAR-10 and CIFAR-100 benchmarks, i.e., more layers are updated locally and for a higher number of epochs. This leads to the superior compute benefits of the proposed scheme on these benchmarks.
3.2 EXECUTION TIME BENEFITS FOR MULTI-GPU TRAINING
We analyze the memory footprint of the ResNet50 network when trained with LoCal+SGD on the ImageNet dataset. Training first commences with all layers updated with SGD, resulting in a high memory footprint. Due to the 10 GB capacity of the chosen GPU, the mini-batch size is set to 64 per GPU. As the Localized→SGD transition layer progresses across the network, the memory footprint required also gradually reduces across epochs. We take advantage of this reduction in memory footprint in the context of distributed training using 4 GPUs with data parallelism. Specifically, we extract additional runtime benefits by increasing the batch size on each GPU,
which reduces the frequency of gradient aggregation between devices and alleviates the communication overhead. At epoch 33, the memory footprint per GPU reduces to less than 5 GB, allowing training with an increased mini-batch size of 128 per GPU from epoch 33 onwards. The doubling of the batch-size provides an additional 6% runtime improvement, when measured across the entire training period. We note that other training techniques such as training with stochastic depth cannot exploit this feature, due to minimal reduction in memory footprint.
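A hedged sketch of how the batch-size increase could be wired into a training loop is shown below; the epoch-33 threshold follows the observation above for ResNet50 on ImageNet, while the helper name and DataLoader settings are our own simplifications (a real multi-GPU setup would also rebuild its DistributedSampler).

```python
from torch.utils.data import DataLoader

def make_loader(dataset, epoch, grow_epoch=33, base_bs=64, workers=4):
    """Double the per-GPU mini-batch once the localized prefix has freed
    enough activation memory, reducing gradient-aggregation frequency."""
    bs = base_bs * 2 if epoch >= grow_epoch else base_bs
    return DataLoader(dataset, batch_size=bs, shuffle=True,
                      num_workers=workers)
```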
3.3 ABLATION ANALYSIS
As mentioned in Section 2, the hyper-parameters α, tshift and Lmax control the progression of the boundary across the network. Different values of these parameters can result in different learning mode configurations during training, corresponding to different points in the computational efficiency vs. accuracy trade-off space. To understand the trade-off space between accuracy and runtime benefits, we now individually study the impact of each parameter.
[Figure 5: Compute efficiency vs. accuracy trade-off on the ImageNet dataset for (a) ResNet50 and (b) MobileNetV2; speed-up is plotted against the loss in accuracy (%).]
Impact of α: Figure 5 depicts the best compute benefits achieved for different α, for accuracy losses ranging from 0.1%-1.5%, for the ResNet50 and MobileNetV2 benchmarks on ImageNet. On the ResNet50 benchmark, even while limiting the loss in accuracy to 0.1%, LoCal+SGD achieves a 1.1× speedup over traditional SGD. The speedups
increase to 1.38×-1.47× when around 1.5% loss in accuracy is tolerable.
[Figure 6: Top-1 error (%) and speed-up/runtime savings on ImageNet for (a) varying tshift (%) and (b) varying Lmax (%).]
Impact of tshift: Figure 6(a) depicts the impact of tshift, denoted as a percentage of the total network depth, on accuracy for the ResNet50 benchmark. Accuracy is largely stable in the regime of tshift between 5-12%, and begins to experience small degradations again when tshift exceeds 12%. These trends can be explained by analyzing the rate at which the transition layer progresses, and the number of layers transitioning to localized updates in an epoch, for different tshift values. Smaller values of tshift (<3%) give rise to low values of K (∼1-2 epochs), the minimum number of epochs that must elapse before the transition layer can shift again. This results in fast progression of the transition layer across the network, leading to rapid changes in the learning mode at the boundary, thereby negatively impacting accuracy. In contrast, while larger tshift values (>12%) encourage slow progression of the boundary, a larger number of layers transition from SGD to localized updates in a single epoch, thereby impacting accuracy. We note that in both cases, while α and Lmax can be tuned to control the progression and mitigate the loss in accuracy, the runtime savings are vastly reduced (<10%). Furthermore, for fixed values of Lmax and α, the runtime benefits are largely insensitive to tshift, as the average number of locally updated layers remains similar. Hence, for the best accuracy and runtime benefits, we set tshift in the range of 5-10% for all networks.
Impact of Lmax: Figure 6(b) depicts the impact of Lmax on accuracy for the ResNet50 network. For each Lmax, we identify the α and tshift that provide the best runtime benefits with minimal loss in accuracy (less than 0.5%). As with tshift, we denote Lmax as a percentage of the total network
depth. As seen in the figure, the degradation in accuracy increases slowly for Lmax in the initial layers - it is merely 0.1% at around Lmax = 30%, and increases to 0.4-0.5% for Lmax = 60-70%. However, the accuracy degradation sharply increases beyond 2% once Lmax exceeds 90% of the network depth. Further, runtime benefits generally increase with higher values of Lmax, for fixed tshift and α. Hence, for achieving a good accuracy versus runtime trade-off, we usually set Lmax to 75% for all networks.
4 RELATED WORK
This section discusses research efforts related to the proposed LoCal+SGD training technique. These efforts can be broadly categorized into two classes. The first class of efforts focuses on compute-efficient DNN training. All efforts belonging to this class utilize gradient-descent algorithms to train the DNN model. These techniques are largely complementary to LoCal+SGD, as they can potentially be applied to the parts of the DNN model updated with SGD. In Section 3, we demonstrated how LoCal+SGD achieves a superior accuracy versus computational efficiency trade-off than some of these efforts. The second class of efforts involves neuro-scientifically faithful learning rules, such as those based on feedback alignment (Nøkland, 2016). Our work is orthogonal to such efforts, as we selectively combine localized learning rules with SGD for better computational efficiency.
We elucidate upon the different research efforts in both directions below.
Hyper-parameter tuning: Many notable algorithmic efforts are directed towards achieving training efficiency by controlling the hyper-parameters involved in gradient descent, notably the learning rate. (You et al., 2017a; Akiba et al., 2017; Goyal et al., 2017; You et al., 2017b) propose learning rate tuning algorithms that achieve training in less than an hour with no loss in accuracy, when distributed over hundreds of CPU/GPU cores.
Model size reduction during training: Model size reduction via pruning and quantization is a popular technique to reduce compute costs during inference. In many of these efforts, a dense or full-precision model is re-trained or fine-tuned to obtain a pruned or quantized model. Recently, several efforts have also investigated dynamically pruning (Lym et al., 2019) or quantizing (Sun et al., 2019) a model during training itself. The reduction in model size results in training speed-ups. Taking a slightly different approach, (Huang et al., 2016) proposes stochastically dropping residual blocks in extremely deep networks such as ResNet-1202, not only for training runtime benefits but also for better accuracy due to improved gradient strength.
Instance importance based training: Recent research efforts have discovered that not all training samples are required for improving loss minimization during SGD training (Jiang et al., 2019; Zhang et al., 2019). That is, a sizable fraction of the samples can be skipped during several epochs, depending on their impact on the classification loss evaluated during FP. This translates to a reduction in mini-batches, providing considerable runtime benefits.
Neuro-scientific learning rules: Back-propagation algorithms utilized in DNN training are not biologically plausible, and do not explain how learning actually happens in the brain. To this end, there have been several efforts to develop biologically faithful learning algorithms, which demonstrate considerable success on complex benchmarks including Cifar10 and ImageNet. For example, unlike conventional DNN training, feedback alignment algorithms (Nøkland, 2016) tackle the weight transport problem (Liao et al., 2015) by allowing for asymmetry in the weight values during forward and back propagation. Likewise, Target-Propagation (Lee et al., 2015) encourages neural activity to reach desired target activations evaluated during forward propagation itself, instead of utilizing loss gradients.
5 CONCLUSION
In this paper, we introduce a new approach to improve the training efficiency of state-of-the-art DNNs. Specifically, we take advantage of the computationally efficient nature of localized learning rules and selectively update some layers with these rules instead of SGD. We design an intelligent learning mode selection algorithm that determines the update method for the convolutional layers of the network in every epoch, maintaining accuracy while extracting maximum runtime benefits. Further, we implement a low-cost weak supervision scheme that brings the accuracy of the proposed scheme closer to traditional SGD training. Across a benchmark suite of 8 DNNs, we achieve up to 1.5× reduction in training time, as measured on a modern GPU platform.
6 APPENDIX
6.1 EXPERIMENTAL SETUP
This subsection describes the experimental setup used for realizing the baseline and proposed LoCal+SGD training schemes, on the benchmarks specified in Section 3 of the main paper. We conduct our experiments on the complete training and test datasets of each benchmark, using the PyTorch (Paszke et al., 2019) framework.
Baseline: We consider end-to-end SGD training as the baseline in our experiments. The hyperparameters used in SGD training of each of the benchmarks are described below.
ImageNet: For the experiments in Section 3.1, we utilize a batch size of 64 per GPU for all benchmarks. For the ResNet50 and ResNet34 benchmarks, the initial learning rate is set to 0.025. The learning rate is decayed by 0.1× every 30 epochs, for a total training duration of 90 epochs, and the weight decay is 4e−5. The MobileNetV2 benchmark utilizes an initial learning rate of 0.0125. We use a cosine learning rate decay schedule, as in (Li et al., 2019), for 150 epochs. The weight decay is set to 4e−5. All benchmarks use an input size of 224×224×3.
For the experiments in Section 3.2, the total batch size at epoch 1 is 256 (64×4), with the initial learning rate set to 0.1 for the ResNet benchmarks and 0.05 for the MobileNetV2 benchmark. All other parameters remain the same.
Cifar10 and Cifar100: All Cifar10 and Cifar100 experiments utilize a batch size of 64. The Cifar10 benchmarks are trained with an initial learning rate of 0.05, decayed by 0.1× every 10 epochs, across 90 epochs. The initial learning rate of the Cifar100 benchmarks is 0.025, decayed by 0.5× every 20 epochs, for 150 epochs in total. The weight decay is set to 5e−4. Both datasets utilize an input size of 32×32×3.
LoCal+SGD: In the proposed LoCal+SGD training scheme, the layers updated with SGD are trained with the same hyper-parameters used in the baseline implementation. Further, LoCal+SGD training is conducted using the same number of epochs as baseline SGD training. When a layer is updated locally, the initial learning rate is 0.01 and is decayed by a factor of 2 and 10 every 30 epochs, for the Cifar and the ImageNet benchmarks respectively. In all experiments, the α parameter is set to 0.95. We measure the accuracy and runtime of the proposed scheme for the same number of training epochs as the baseline implementations.
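For concreteness, a small sketch of the learning rate schedule described above for the locally updated layers; the helper name is ours, and the schedule simply encodes the initial rate of 0.01 with a 30-epoch decay interval (factor 10 for ImageNet, factor 2 for Cifar).

```python
def local_lr(epoch, dataset="imagenet"):
    """LR for locally updated layers (Sec. 6.1): initial 0.01, decayed
    every 30 epochs by 10x on ImageNet and by 2x on Cifar."""
    decay = 10.0 if dataset == "imagenet" else 2.0
    return 0.01 / (decay ** (epoch // 30))
```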
6.2 HYPER-PARAMETER TUNING
To realize LoCal+SGD, we introduce three hyper-parameters: α, tshift and Lmax. tshift controls the number of layers that switch to SGD-based updates every epoch, Lmax is the maximum number of layers that can be updated with localized learning rules, and α determines the position of the
transition layer every epoch by analyzing the gradient information at the boundary between the localized and SGD updates.
To obtain optimized values for these hyper-parameters, we first perform simple grid search using a single network for a particular dataset (for example, we choose the ResNet50 network for ImageNet). We transfer the same hyper-parameter values to other networks for the same dataset. We justify our use of common hyper-parameter values by the following experiment. In Table 4 below, we depict the results on other ImageNet benchmarks (ResNet34 and MobileNetV2) when hyper-parameter tuning is performed for each benchmark individually. As can be seen, the accuracy and runtime benefits are only marginally better than those obtained using a common set of hyper-parameters obtained by tuning on the ResNet50 benchmark. We thus utilize common values for a dataset, effectively rendering them constants. The time taken to obtain these constants is thus a one-time cost, and does not impact the speedups obtained by LoCal+SGD.
6.3 IMPACT OF WEAK SUPERVISION
In Table 5, we highlight the impact of the weak supervision technique on final classification accuracy. As can be seen, across all our benchmarks, the weak supervision technique clearly improves accuracy, by 0.06%-0.17%, bringing the final accuracy of LoCal+SGD closer to baseline SGD.
6.4 ADDITIONAL COMPARATIVE ANALYSIS
In addition to the experiments performed in Section 3 to compare the performance of LoCal+SGD against existing techniques such as pruning during training (Lym et al., 2019) and training with stochastic depth (Huang et al., 2016), we conduct additional experiments to further solidify the superiority of our approach. We elucidate upon these comparisons as follows.
6.4.1 COMPARING LoCal+SGD AGAINST SGD AT ISO-ACCURACY

We compare the proposed LoCal+SGD training strategy against an SGD baseline trained with fewer epochs, i.e., the number of epochs required to reach the highest accuracy obtained by LoCal+SGD across the total training periods listed in Section 6.1. For the ImageNet benchmarks, the runtime improvements are listed in Table 6 below. Clearly, LoCal+SGD continues to achieve significant speed-ups (around 1.25×) compared to the SGD baseline, even for complex benchmarks such as ResNet50 and MobileNetV2.
6.4.2 COMPARING LoCal+SGD AGAINST FREEZING LAYERS DURING TRAINING
In Section 3, we compare LoCal+SGD against a technique, freezing layers during training, wherein instead of updating the layers using localized learning, the weights are held fixed. In this section,
we perform a more thorough comparison of LoCal+SGD against freezing layers during training. Specifically, we perform this comparison at iso-runtime, and analyze the resulting accuracy of either approach. To elaborate, we first identify the LoCal+SGD configuration that can reach the best accuracy within 0.05%, 0.1%, 0.25%, 0.5% and 1% of the baseline SGD accuracy. Then, for the same runtimes taken by each LoCal+SGD configuration, we identify the configuration that provides the best accuracy for the freezing-layers approach. Our results for the Cifar10 ResNet18 benchmark can be found in Table 7. LoCal+SGD performs better than freezing layers during training on 3 out of the 5 configurations studied, i.e., it is the superior technique when the loss compared to SGD is allowed to exceed 0.1%.
6.5 ANALYSIS OF STATIC SCHEDULES FOR LEARNING MODE SELECTION
The current LoCal+SGD framework is realized with the help of an automatic learning mode selection algorithm, which determines the position of the transition layer every epoch. Instead of a dynamic, data-dependent algorithm, we investigate the benefits of using a static schedule, wherein the position of the transition layer is determined by a pre-defined scheduling function. To this end, we implement a simple static schedule that favors aggressive application of the localized learning rule in the initial layers, and gradually decreases the number of epochs for which localized learning is applied in the deeper layers. As shown in Equation 3, we opt for a quadratic scheduling function, as we empirically observe that it performs better than the linear functions we studied. Here, N determines the position of the transition layer every epoch, Emax is the maximum number of training epochs, and c1 and c2 are constants obtained using grid search.
N = ⌊max(0, c1 − c2 · (E − Emax)²)⌋ (3)
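A one-line Python rendering of this schedule follows; clamping at zero means early epochs keep all layers on SGD, with the localized prefix growing quadratically toward c1 as training approaches Emax.

```python
import math

def static_transition(E, E_max, c1, c2):
    """Eq. 3: quadratic static schedule for the transition layer index N."""
    return math.floor(max(0.0, c1 - c2 * (E - E_max) ** 2))
```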
We report the results using this static schedule in Table 8 for the ImageNet-ResNet50 and MobileNetV2 benchmarks. Compared to the results reported in Table 1, we find that the static schedule achieves slightly higher runtime benefits, for marginally lower accuracies. However, static schedules suffer from some drawbacks: several static scheduling functions are feasible, e.g., exponential, quadratic, etc., and identifying the best scheduling function for each network requires extensive empirical analysis. The learning mode selection algorithm utilized in the paper helps alleviate this by automatically identifying the position of the transition layer every epoch, leveraging the gradient information at the boundary between localized updates and SGD.

1. What is the focus and contribution of the paper on accelerating DNN training?
2. What are the strengths of the proposed approach, particularly in terms of reducing computational cost?
3. What are the weaknesses of the paper, especially regarding experiment results and comparisons with other works?
4. Do you have any concerns about the stability and scalability of the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Review
Accelerating DNN Training through Selective Localized Learning
In this paper, the authors proposed a new approach by the name of LoCal + SGD (Localized Updates) to replace the traditional Backpropagation method. The key idea is to selectively update some layers’ weights using localized learning rules. For these layers, the computation cost is reduced from two matrix multiply operations to one matrix multiply operation. The authors also proposed the Learning Mode Selection Algorithm to maintain the accuracy and convergence.
The authors provided some experimental results on common deep learning benchmarks such as ImageNet/ResNet and CIFAR/VGG. Overall, the authors reported that this approach can achieve around 1.36x speedup for a 0.4% loss in accuracy for ResNet-50. The authors also reported that they can achieve a higher speed than recent methods such as Structured Pruning and stochastic depth.
This work is a trade-off between computation and accuracy (details in Figure 5). I have some questions for the authors:
(1) How stable is the proposed method (LoCal + SGD)? Does it still work for large-batch optimization and asynchronous training?
(2) What is the overhead of hyper-parameter tuning?
(3) Did the authors use the same number of epochs as the baseline to finish the training?
(4) What is the absolute speed (e.g. in GFlops or TFlops)? Can the proposed method beat a well-optimized NVIDIA implementation?
(5) What is the limit of the proposed method?
(6) Can the Learning Mode Selection Algorithm work with other methods?
Since this work fundamentally changes the way of learning, it is probably necessary to do a convergence analysis for the proposed method. |
ICLR

Title
Accelerating DNN Training through Selective Localized Learning
Abstract
Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. We propose LoCal+SGD, a new algorithmic approach to accelerate DNN training by selectively combining localized or Hebbian learning within a Stochastic Gradient Descent (SGD) based training framework. Back-propagation is a computationally expensive process that requires 2 Generalized Matrix Multiply (GEMM) operations to compute the error and weight gradients for each layer. We alleviate this by selectively updating some layers’ weights using localized learning rules that require only 1 GEMM operation per layer. Further, since the weight update is performed during the forward pass itself, the layer activations for the mini-batch do not need to be stored until the backward pass, resulting in a reduced memory footprint. Localized updates can substantially boost training speed, but need to be used selectively and judiciously in order to preserve accuracy and convergence. We address this challenge through the design of a Learning Mode Selection Algorithm, where all layers start with SGD, and as epochs progress, layers gradually transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while the transition layer and later layers use gradient-based updates. The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs. We also propose a low-cost weak supervision mechanism by controlling the learning rate of localized updates based on the overall training loss. We applied LoCal+SGD to 8 image recognition CNNs (including ResNet50 and MobileNetV2) across 3 datasets (Cifar10, Cifar100 and ImageNet). Our measurements on a Nvidia GTX 1080Ti GPU demonstrate upto 1.5× improvement in end-to-end training time with ∼0.5% loss in Top-1 classification accuracy.
N/A
Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. We propose LoCal+SGD, a new algorithmic approach to accelerate DNN training by selectively combining localized or Hebbian learning within a Stochastic Gradient Descent (SGD) based training framework. Back-propagation is a computationally expensive process that requires 2 Generalized Matrix Multiply (GEMM) operations to compute the error and weight gradients for each layer. We alleviate this by selectively updating some layers’ weights using localized learning rules that require only 1 GEMM operation per layer. Further, since the weight update is performed during the forward pass itself, the layer activations for the mini-batch do not need to be stored until the backward pass, resulting in a reduced memory footprint. Localized updates can substantially boost training speed, but need to be used selectively and judiciously in order to preserve accuracy and convergence. We address this challenge through the design of a Learning Mode Selection Algorithm, where all layers start with SGD, and as epochs progress, layers gradually transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while the transition layer and later layers use gradient-based updates. The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs. We also propose a low-cost weak supervision mechanism by controlling the learning rate of localized updates based on the overall training loss. We applied LoCal+SGD to 8 image recognition CNNs (including ResNet50 and MobileNetV2) across 3 datasets (Cifar10, Cifar100 and ImageNet). Our measurements on a Nvidia GTX 1080Ti GPU demonstrate upto 1.5× improvement in end-to-end training time with ∼0.5% loss in Top-1 classification accuracy.
1 INTRODUCTION
Deep Neural Networks (DNNs) have achieved continued success in many application domains involving images (Krizhevsky et al., 2017), videos (Ng et al., 2015), text (Zhou et al., 2015) and natural language (Goldberg & Hirst, 2017). However training state-of-the-art DNN models is computationally quite challenging, often requiring exa-FLOPs of compute as the models are quite complex and need to be trained using large datasets. Despite rapid improvements in the capabilities of GPUs and the advent of specialized accelerators, training large models using current platforms is still quite expensive and often takes days to even weeks. In this work, we aim to reduce the computational complexity of DNN training through a new algorithmic approach called LoCal+SGD1, which alleviates the key performance bottlenecks in Stochastic Gradient Descent (SGD) through selective use of localized or Hebbian learning.
Computational Bottlenecks in DNN Training. DNNs are trained in a supervised manner using gradient-descent based cost minimization techniques such as SGD (Bottou, 2010) or Adam (Kingma & Ba, 2015). The training inputs (typically grouped into minibatches) are iteratively forward propagated (FP ) and back propagated (BP ) through the DNN layers to compute weight updates that push the network parameters in the direction that decreases the overall classification loss.
1In addition to combining localized and SGD based learning, LoCal+SGD is Low-Calorie SGD or SGD with reduced computational requirements
Back-propagation is computationally expensive, accounting for 65-75% of the total training time on GPUs. This is attributed to two key factors: (i) BP involves 2 Generalized Matrix Multiply (GEMM) operations, one to propagate the error across layers and the other to compute the weight gradients, and (ii) when training on distributed systems using data/model parallelism(Dean et al., 2012b; Krizhevsky et al., 2012), aggregation of weight gradients/errors across devices incurs significant communication overhead. Further, BP through auxiliary ops such as batch normalization are also more expensive than FP .
Prior Efforts on Efficient DNN Training. Prior research efforts to improve DNN training time can be grouped into a few directions. One group of efforts enable larger scales of parallelism in DNN training through learning rate tuning (You et al., 2017a; Goyal et al., 2017; You et al., 2017b) and asynchronous weight updates (Dean et al., 2012a). Another class of efforts employ importancebased sample selection during training, wherein ‘easier’ training samples are selectively discarded to improve runtime (Jiang et al., 2019; Zhang et al., 2019). Finally, model quantization (Sun et al., 2019) and pruning (Lym et al., 2019) can lead to significant runtime benefits during training by enabling the use of reduced-bitwidth processing elements.
LoCal+SGD: Combining SGD with Localized Learning. Complementary to the aforementioned efforts, we propose a new approach, LoCal+SGD, to alleviate the performance bottlenecks in DNN training, while preserving model accuracy. Our hybrid approach combines Hebbian or localized learning (Hebb) with SGD by selectively applying it in specific layers and epochs. Localized learning rules (Hebb; Oja, 1982; Zhong, 2005) utilize a single feed-forward weight update to learn the feature representations, eschewing BP . Careful formulation of the localized learning rule can result in ∼2× computation savings compared to SGD and also significantly reduces memory footprint as activations from FP needed not be retained until BP . The reduction in memory footprint can in turn allow increasing the batch size during training, which leads to further runtime savings due to better compute utilization and reduced communication costs. It is worth noting that localized learning has been actively explored in the context of unsupervised learning (Chen et al., 2020; van den Oord et al., 2018; Hénaff et al., 2019). Further, there has been active research efforts on neuro-scientific learning rules (Lee et al., 2015; Nøkland, 2016). Our work is orthogonal to such efforts and represents a new application of localized learning in a fully supervised context, wherein we selectively combine it within an SGD framework to achieve computational savings.
Preserving model accuracy and convergence with LoCal+SGD requires localized updates to be applied judiciously i.e., only to selected layers in certain epochs. We address this challenge through the design of a learning mode selection algorithm. At the start training, the selection algorithm initializes the learning mode of all layers to SGD, and as training progresses determines the layers that transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while subsequent layers use gradient-based updates. This allows BP to stop at the transition layer, as layers before it have no use for the back-propagated errors. The algorithm takes advantage of the magnitude of the weight updates of the Localized→SGD transition layer in deciding the new position of the boundary every epoch. Further, we provide weak supervision by tweaking the learning rate of locally updated layers based on overall training loss.
Contributions: To the best of our knowledge, LoCal+SGD is the first effort that combines localized learning (an unsupervised learning technique) within a supervised SGD context to reduced computational costs while maintaining classification accuracy. This favorable tradeoff is achieved by LoCal+SGD through a Learning Mode Selection Algorithm that applies localized learning to selected layers and epochs. Further improvement is achieved through the use of weak supervision by modulating the learning rate of locally updated layers based on the overall training loss. Across 8 image recognition CNNs (including ResNet50 and MobileNet) and 3 datasets (Cifar10, Cifar100 and ImageNet), we demonstrate that LoCal+SGD achieves up to 1.5× improvement in training time with ∼0.5% Top-1 accuracy loss on a Nvidia GTX 1080Ti GPU.
2 LoCal+SGD: COMBINING SGD WITH SELECTIVE LOCALIZED LEARNING The key idea in LoCal+SGD is to apply localized learning to selected layers and epochs during DNN training to improve the overall execution time, without incurring loss in accuracy. The following components are critical to the effectiveness of LoCal+SGD:
• Localized Learning Rule Formulation. We formulate a computationally efficient localized learning rule and highlight the clear runtime benefits when compared to SGD. • Learning Mode Selection Algorithm. We propose a learning mode selection algorithm
that chooses between localized learning and SGD-based learning for each layer in every epoch, based on the potential impact on accuracy and computational benefits. • Weak Supervision. We propose a weak supervision technique, which comprises of a low-
cost supervision signal communicated to the localized learning layers in each epoch. The signal modulates the learning rates of these layers based on the rate of change of the overall classification loss.
In the following sub-sections, we describe the salient aspects of these components in greater detail.
2.1 EFFICIENT LOCALIZED LEARNING
Localized learning has been extensively explored in the context of unsupervised learning, demonstrating success on small (<= 3 layer) networks using relatively simpler datasets (e.g. MNIST, Cifar-10) (LeCun & Cortes, 2010; Krizhevsky et al., a)) with an accuracy gap that is yet to be bridged on larger datasets (e.g. ResNet50 or MobileNetV2 on ImageNet (Deng et al., 2009)). First proposed in (Hebb), the key intuition behind localized learning rules is to encourage correlations between neurons that have similar activation patterns. Equation 1 depicts the Hebbian weight update proposed in (Hebb), for a synapse with weight W , connecting a pair of input and output neurons whose activation values are represented by x and y respectively, with η as the learning rate.
4W = η · x · y (1) Considerable research has gone into evolving this equation over the years to improve the performance of localized learning (Oja, 1982; Zhong, 2005). However, many of the proposed rules are computationally complex, or are difficult to parallelize on modern hardware platforms such as GPUs and TPUs. Since our primarily goal is improving DNN training time, we adopt the computationally simple localized learning rule presented in Equation 1.
The learning rule in Equation 1 assumes a distinct synapse between each input and output neuron pair. While its application to fully-connected (fc) layers is straightforward, we need to consider the sharing of weights between neuron pairs in convolutional (conv) layers. For updating a shared weight of a conv layer, we calculate the individual updates due to each pair of pre- and post-synaptic neurons sharing the weight and sum all such updates. This essentially reduces to a convolution operation between the input and output activations of the layer and can be expressed by Equation 3 in Figure 1. For further computational efficiency improvement, unlike Equation 1, we consider the pre-activation-function values of the outputs i.e., zl instead of their post activation value al. Further, we normalize the localized update values as shown in Equation 4 of Figure 1, as it was observed to achieve better convergence in practice.
Overall, we utilize Equations 3 and 4 from Figure 1 to perform the weight updates in all layers that are earlier than the Localized→SGD transition layer during a certain epoch. All other layers continue to be updated using SGD-based BP , expressed by Equations 5-7 in Figure 1. SGD updates are applied to batch-normalization layers present after the Localized→SGD transition layer, and are otherwise skipped. Clearly, Equation 3 has the same computational complexity as Equation 6 of SGD-based BP for conv and fc layers. Thus, from Figure 1, we can directly infer that our localized learning rule will be considerable faster than SGD-based BP . In practice, we measured this improvement to be more than 2× on a NVIDIA GTX 1080Ti GPU for the ImageNet-ResNet50 benchmark, across all conv and fc layers. In addition to the computational complexity, the memory footprint of SGD-based
BP is also higher. This is because DNN software frameworks commonly store all activation values computed during FP to avoid recomputing al−1, the input activations to the layers, used in Equation 6 of SGD-based BP . In contrast, the localized update for a layer can be performed as soon as the FP through the layer is complete. The activation tensor al of layer L can be discarded or over-written as soon as FP proceeds to the next layer in the network, thereby freeing up a significant portion of on-device memory during training. In turn, this can allow larger minibatch sizes to be accommodated on a given hardware platform, when the localized updates are applied on a sufficient number of layers.
2.2 LEARNING MODE SELECTION ALGORITHM
The compute benefits of localized learning come at the cost of potential loss in classification accuracy with respect to SGD training. Thus, we utilize a learning mode selection algorithm to judiciously choose when and where to apply localized learning. The proposed algorithm identifies the learning mode of each layer at every epoch to maximize the runtime benefits, while incurring minimal losses in classification accuracy.
To design an efficient learning mode selection algorithm, we first study the effects of different spatiotemporal patterns of localized learning on the computational efficiency and classification accuracy of a network. We specifically investigate whether localized learning is more suitable for specific layers in the network and specific phases in the training process.
Impact on runtime efficiency: We first analyze the spatial trends, i.e., if locally updating specific layers in the network results in better runtime efficiency. In a particular epoch, if a convolutional layer L, updated with SGD precedes a convolutional layer K, that is updated locally, calculating the SGD-based error gradients of Layer L, i.e. δL, requires error propagation through the locally updated layer K. From a compute efficiency perspective, the benefits of using localized-updates in layer K completely vanish. Thus, it makes sense to partition the network into two regions - a prefix (set of initial layers) that are updated using localized learning, followed by layers that are updated with SGD. SGD-based BP is stopped at the junction of the two regions. Naturally, the compute benefits increase when the number of locally updated layers are higher and thus the boundary i.e., the Localized→SGD transition layer is moved deeper into the network. The impact of different temporal patterns on runtime efficiency is quite straightforward, with higher number of locally updated epochs leading to higher benefits. Further, as the compute complexity of localized updates is constant across different epochs, these benefits are agnostic of which particular epoch involves localized learning.
Impact on accuracy: To analyze the impact on accuracy, we first examine the nature of features learnt by different layers trained by SGD. It is commonly accepted that the initial layers of a network (Agrawal et al., 2014) perform feature extraction, while later layers aid in the classification process. As localized learning demonstrates better performance for feature extraction, applying it more aggressively, i.e for higher number of epochs, in the initial layers has a much smaller impact accuracy. However, for later layers in the network, the number of localized learning epochs should be progressively reduced to preserve accuracy.
Overall, based on the impact of localized learning on both runtime and accuracy, we find that a good learning mode selection algorithm should favor application of localized learning to a contiguous group of initial layers, while ensuring fewer or no localized learning epochs in later layers. We further
impose an additional constraint on top of this spatio-temporal pattern. Specifically, we allow each layer to transition from one learning mode to another at most once during the entire training process. We empirically observe that utilizing SGD as the initial learning mode allows the network to achieve a higher accuracy than utilizing localized learning as the initial mode. SGD essentially provides a better initialization point for all layers, and the subsequent use of localized updates enables the training to converge with good accuracy.
Algorithm 1 Learning Mode Selection Algorithm Input: TE (Index of the transition layer at epoch
E), ek (epochs since last transition), ||4WE || (L2 norm of the weight update of the transition layer at epoch E), K (minimum interval between transitions), tshift (number of layers to shift boundary) Output: TE+1 (Index of the transition layer at epoch E+1)
1: WAvg = 1K ∑e=E−1
e=E−K || 4We|| 2: if || 4WE || <= α ·WAvg and ek>=K 3: TE+1 = TE + tshift 4: ek = 0 5: else 6: TE+1 = TE 7: ek = ek + 1 In accordance with the above considerations, we propose a learning mode selection algorithm, described in Algorithm 1, that identifies the position of the boundary or the Localized→SGD transition layer every epoch. To that end, the algorithm analyzes the L2 norm of the SGD weight updates made to the Localized→SGD transition layer across epochs and determines whether the boundary can be shifted deeper into the network for the next epoch. In order to ensure stability in the training process, the algorithm moves the boundary at most once in every K epochs. It calculates the running average of the norm of the updates, Wavg, over the last K epochs (line 1). The boundary is shifted to the right only if the weight update in epoch E is within a fraction α of Wavg, and K epochs have transpired since the last transition (line 2). The rationale for this criterion is that sustained high magnitudes of weight updates in the transition layer indicate that they are potentially critical to accuracy, in which case the transition layer must continue being updated with SGD. If the criterion is not satisfied, the boundary remains stationary (line 5).
The value of α is set by analyzing the trends in the weight update magnitudes across the training process for different networks. The hyper-parameter tshift is set to the size of a recurring block, such as the residual blocks in ResNets and MobileNetV2. The hyper-parameter K is selected in a manner that ensures that localized updates are never applied beyond some fraction of the initial network layers. We denote this fraction as Lmax, which is set to 0.75 in all our experiments. Equation 2 is used to compute K for a network of L layers and a total training period of Emax epochs.
K = Emax / ((Lmax · L) / tshift)    (2)
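As a concrete instantiation of Equation 2 (the numbers below are illustrative, not the paper's settings):

```python
# Hypothetical instantiation of Equation 2.
E_max   = 90     # total training epochs
L       = 48     # number of layers in the network
t_shift = 3      # layers moved per transition (one residual block)
L_max   = 0.75   # localized updates never cover more than 75% of the layers

num_transitions = L_max * L / t_shift   # 12 boundary shifts in total
K = E_max / num_transitions             # 7.5 -> one shift at most every ~7-8 epochs
```

That is, K spreads the Lmax · L / tshift required boundary shifts evenly across the training period.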
In Figure 3, we plot the progression of the transition layer across the ResNet-34 and -50 benchmarks trained on the ImageNet dataset using LoCal+SGD. Interestingly, the weight update norm metric automatically modulates the rate at which the boundary progresses, as the boundary traverses the deeper layers at a slower rate.
2.3 WEAK SUPERVISION
To further bridge the accuracy gap between our approach and end-to-end SGD training, we introduce weak supervision in the locally updated layers. Unlike the SGD-updated layers, the locally updated layers in our approach cannot take advantage of the information provided by supervision, i.e., the classification error evaluated at the output. We utilize this supervised information through a low-cost weak supervision scheme that consists of a single signal sent to all layers updated locally in a particular epoch, derived from the classification loss observed over the past few epochs. The weak supervision scheme is described in Algorithm 2.
The key principle behind the weak supervision scheme is to control the learning rates of the locally updated layers based on the rate at which the overall classification loss changes. For example, if the overall classification loss has increased across consecutive epochs, we reverse the direction of the
updates (line 3) in the next epoch. In contrast, the update direction is maintained if the overall loss is decreasing (line 5). We find that this weak supervision provides better accuracy than other learning rate modulation techniques for the locally updated layers, such as Adam or momentum-based updates.
Algorithm 2 Weak Supervision Scheme
Input: Li (overall classification loss at epoch i), lrL (original learning rate of layer L)
Output: WL (updated weights of layer L)
1: ΔWL = conv(al−1, zl)
2: if Li−1 < Li then
3:     WL = WL − lrL · ΔWL / ||ΔWL||
4: else
5:     WL = WL + lrL · ΔWL / ||ΔWL||

We would like to highlight that traditional SGD provides fine-grained supervision and involves evaluating the error gradients for every neuron in the network. In contrast, the proposed weak supervision scheme provides coarse-grained supervision by forcing all weights to re-use the same loss information. Overall, our weak supervision scheme is not developed with the intent to compete with SGD updates, but is rather a simple, approximate and low-cost technique that brings the final accuracy of LoCal+SGD on par with end-to-end SGD training performance.
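A PyTorch-style sketch of Algorithm 2 for a single locally updated layer is shown below. The hebbian_update callable stands in for the conv(al−1, zl) computation of line 1 and is an assumed helper, as are the argument names.

```python
import torch

def weak_supervised_step(weight, a_prev, z, lr, loss_prev, loss_curr,
                         hebbian_update):
    """Sketch of Algorithm 2 for one locally updated layer.

    The normalized localized update is applied with its direction
    reversed whenever the overall classification loss rose between
    consecutive epochs (loss_prev < loss_curr).
    """
    with torch.no_grad():
        delta_w = hebbian_update(a_prev, z)    # line 1: conv(a_{l-1}, z_l)
        delta_w = delta_w / delta_w.norm()     # normalized update
        sign = -1.0 if loss_prev < loss_curr else 1.0   # lines 2-5
        weight += sign * lr * delta_w
```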
3 EXPERIMENTAL RESULTS
In this section, we present the results of our experiments highlighting the compute benefits achieved by LoCal+SGD. We evaluate the benefits on a suite of 8 image-recognition DNNs spanning 3 datasets. We consider the ResNet18 (He et al., 2015) and VGG13 (Simonyan & Zisserman, 2015) networks for the Cifar10 (Krizhevsky et al., a) and Cifar100 (Krizhevsky et al., b) datasets; and the ResNet34, ResNet50 (He et al., 2015) and MobileNetV2 (Sandler et al., 2018) networks for the ImageNet dataset (Deng et al., 2009). All experiments are conducted on Nvidia GTX 1080Ti GPUs with the batch size set to 64 per GPU, unless otherwise mentioned. Further experimental methodology details for the baseline and proposed approach are provided in the Appendix.
3.1 SINGLE GPU EXECUTION TIME BENEFITS
ImageNet: Table 1 presents the performance of the baseline (end-to-end SGD training) and the proposed LoCal+SGD algorithm on the ImageNet benchmarks in terms of the Top-1 classification error and runtime observed on a single GPU. For all benchmarks listed here, LoCal+SGD applies localized updates for nearly 50-60% of the layers. As can be seen, LoCal+SGD achieves up to ∼1.4× reduction in runtime compared to the baseline, while incurring <0.5% loss in Top-1 accuracy.
Table 1 also compares the performance of LoCal+SGD against existing research efforts designed to improve training efficiency. We perform this analysis against two efforts, namely (i) Training with stochastic depth (Huang et al., 2016) and (ii) Structured Pruning during Training (Lym et al., 2019). Training with stochastic depth, as the name suggests, stochastically bypasses residual blocks by propagating input activations/error gradients via identity or downsampling transformations, resulting in improved training time. However, the approach is targeted towards extremely deep networks and
as seen in Table 1, it incurs a noticeable accuracy loss on networks such as ResNet34, ResNet50 and MobileNetV2. Compared to training with stochastic depth, our proposal clearly achieves better accuracy as well as training runtime benefits. The key principle behind the pruning-during-training approach is to reduce the size of the weight and activation tensors in a structured manner during training, thereby providing speed-ups on GPU/TPU platforms. However, on complex benchmarks such as ResNet50, such techniques achieve speed-ups at the cost of a significant drop in accuracy (∼1.5%). To further demonstrate the utility of localized updates in our approach, we consider a third technique, wherein layers selected to be updated locally for a given epoch are instead frozen, i.e., the parameters are held fixed during that epoch. While this achieves better runtime savings, it incurs considerably higher loss (∼1%) in accuracy, further underscoring the benefits of LoCal+SGD.

CIFAR-10 and CIFAR-100: Table 2 presents the accuracy and corresponding compute benefits of the baseline and the proposed technique, as well as training with stochastic depth and layer freezing, for the CIFAR-10 and CIFAR-100 datasets. Stochastic depth is applicable only to residual blocks and is hence not considered for the VGG-13 network. Across benchmarks, we observe up to a 1.51× improvement in training runtime. Compared to the ImageNet benchmarks, LoCal+SGD applies localized updates more aggressively on the CIFAR-10 and CIFAR-100 benchmarks, i.e., more layers are updated locally and for a higher number of epochs. This leads to the superior compute benefits of the proposed scheme on these benchmarks.
3.2 EXECUTION TIME BENEFITS FOR MULTI-GPU TRAINING
We analyze the memory footprint of the ResNet50 network when trained with LoCal+SGD on the ImageNet dataset. Training first commences with all layers updated with SGD, resulting in a high memory footprint. Due to the 10 GB capacity of the chosen GPU, the mini-batch size is set to 64 per GPU. As the Localized→SGD transition layer progresses across the network, the memory footprint required also gradually reduces across epochs. We take advantage of this reduction in memory footprint in the context of distributed training using 4 GPUs with data parallelism. Specifically, we extract additional runtime benefits by increasing the batch size on each GPU,
which reduces the frequency of gradient aggregation between devices and alleviates the communication overhead. At epoch 33, the memory footprint per GPU reduces to less than 5 GB, allowing training with an increased mini-batch size of 128 per GPU from epoch 33 onwards. The doubling of the batch-size provides an additional 6% runtime improvement, when measured across the entire training period. We note that other training techniques such as training with stochastic depth cannot exploit this feature, due to minimal reduction in memory footprint.
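As a simple illustration, the per-GPU loader can be rebuilt with a larger mini-batch once the boundary has progressed far enough. The epoch threshold and batch sizes below match the ResNet50 run described here; the remaining DataLoader arguments are illustrative.

```python
from torch.utils.data import DataLoader

def make_train_loader(dataset, epoch, switch_epoch=33):
    """Grow the per-GPU mini-batch once localized updates have freed
    enough activation memory (epoch 33 for ResNet50/ImageNet here)."""
    batch_size = 128 if epoch >= switch_epoch else 64
    return DataLoader(dataset, batch_size=batch_size, shuffle=True,
                      num_workers=8, pin_memory=True)
```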
3.3 ABLATION ANALYSIS
As mentioned in Section 2, the hyper-parameters α, tshift and Lmax control the progression of the boundary across the network. Different values of each parameter can result in different learning mode configurations during training, corresponding to different points in the computational efficiency vs. accuracy trade-off space. To understand the trade-off between accuracy and runtime benefits, we now individually study the impact of each parameter.
[Figure 5: Compute efficiency vs. accuracy trade-off on the ImageNet dataset for (a) ResNet50 and (b) MobileNetV2. Both panels plot speed-up against the loss in Top-1 accuracy (%).]
Impact of α: Figure 5 depicts the best compute benefits achieved for different values of α, for accuracy losses ranging from 0.1% to 1.5%, on the ResNet50 and MobileNetV2 benchmarks on ImageNet. On the ResNet50 benchmark, even while limiting the loss in accuracy to 0.1%, LoCal+SGD achieves a 1.1× speedup over traditional SGD. The speedups increase to 1.38×-1.47× when around 1.5% loss in accuracy is tolerable.
[Figure 6: Top-1 error and speed-up/runtime savings as functions of (a) tshift (%) and (b) Lmax (%), both expressed as percentages of the total network depth.]
Impact of tshift: Figure 6(a) depicts the impact of tshift, expressed as a percentage of the total network depth, on accuracy for the ResNet50 benchmark. Accuracy degrades for very small values of tshift, is largely stable in the regime of tshift between 5-12%, and begins to experience small degradations again when tshift exceeds 12%. These trends can be explained by analyzing the rate at which the transition layer progresses, and the number of layers transitioning to localized updates in an epoch, for different tshift values. Smaller values of tshift (<3%) give rise to low values of K (∼1-2 epochs), the minimum number of epochs that must elapse before the transition layer can shift again. This results in fast progression of the transition layer across the network, leading to rapid changes in the learning mode at the boundary, thereby negatively impacting accuracy. In contrast, while larger tshift values (>12%) encourage slow progression of the boundary, a larger number of layers transition from SGD to localized updates in a single epoch, thereby impacting performance. We note that in both cases, while α and Lmax can be tuned to control the progression and mitigate the loss in accuracy, the runtime savings are vastly reduced (<10%). Furthermore, for fixed values of Lmax and α, the runtime benefits are largely insensitive to tshift, as the average number of locally updated layers remains similar. Hence, for the best accuracy and runtime benefits, we set tshift in the range of 5-10% for all networks.
Impact of Lmax : Figure 6(b) depicts the impact of Lmax on accuracy for the ResNet50 network. For each Lmax, we identify the α and tshift that provide the best runtime benefits with minimal loss in accuracy (less than 0.5%). As with tshift, we denote Lmax as a percentage of the total network
depth. As seen in the figure, the degradation in accuracy increases slowly for Lmax in the initial layers - it is merely 0.1% at around Lmax = 30%, and increases to 0.4-0.5% for Lmax = 60-70%. However, the accuracy degradation sharply increases beyond 2% once Lmax exceeds 90% of the network depth. Further, runtime benefits generally increase with higher values of Lmax, for fixed tshift and α. Hence, for achieving a good accuracy versus runtime trade-off, we usually set Lmax to 75% for all networks.
4 RELATED WORK
This section discusses research efforts related to the proposed LoCal+SGD training technique. These efforts can be broadly categorized into two classes. The first class of efforts focuses on compute-efficient DNN training. All efforts belonging to this class utilize gradient-descent algorithms to train the DNN model. These techniques are largely complementary to LoCal+SGD, as they can potentially be applied to the parts of the DNN model updated with SGD. In Section 3, we demonstrated how LoCal+SGD achieves a superior accuracy versus computational efficiency trade-off compared to some of these efforts. The second class of efforts involves neuro-scientifically faithful learning rules, such as feedback alignment (Nøkland, 2016). Our work is orthogonal to such efforts, as we selectively combine localized learning rules with SGD for better computational efficiency.
We elaborate on the research efforts in both directions below.
Hyper-parameter tuning: Many notable algorithmic efforts are directed towards achieving training efficiency by controlling the hyper-parameters involved in gradient descent, notably the learning rate. (You et al., 2017a; Akiba et al., 2017; Goyal et al., 2017; You et al., 2017b) propose learning rate tuning algorithms that achieve training in less than an hour with no loss in accuracy, when distributed over hundreds of CPU/GPU cores.
Model size reduction during training: Model size reduction via pruning and quantization is a popular technique to reduce compute costs during inference. In many of these efforts, a dense or full-precision model is re-trained or fine-tuned to obtain a pruned or quantized model. Recently, several efforts have also investigated dynamically pruning (Lym et al., 2019) or quantizing (Sun et al., 2019) a model during training itself. The reduction in model size results in training speed-ups. Taking a slightly different approach, (Huang et al., 2016) proposes stochastically dropping residual blocks in extremely deep networks such as ResNet-1202, not only for training runtime benefits but also for better accuracy due to improved gradient strength.
Instance importance based training: Recent research efforts have discovered that not all training samples are required for improving loss minimization during SGD training (Jiang et al., 2019; Zhang et al., 2019). That is, a sizable fraction of the samples can be skipped during several epochs, depending on their impact on the classification loss evaluated during FP . This translates to a reduction in mini-batches, providing considerable runtime benefits.
Neuro-scientific learning rules: Back-propagation algorithms utilized in DNN training are not biologically plausible, and do not explain how learning actually happens in the brain. To this end, there have been several efforts that develop biologically faithful learning algorithms and demonstrate considerable success on complex benchmarks including Cifar10 and ImageNet. For example, unlike conventional DNN training, feedback alignment algorithms (Nøkland, 2016) tackle the weight transport problem (Liao et al., 2015) by allowing for asymmetry in the weight values during forward and back propagation. Likewise, Target-Propagation (Lee et al., 2015) encourages neural activity to reach desired target activations evaluated during forward propagation itself, instead of utilizing loss gradients.
5 CONCLUSION
In this paper, we introduce a new approach to improve the training efficiency of state-of-the-art DNNs. Specifically, we take advantage of the computationally efficient nature of localized learning rules and selectively update some layers with these rules instead of SGD. We design an intelligent learning mode selection algorithm that determines the update method for the convolutional layers of the network in every epoch, maintaining the accuracy level while extracting maximum compute benefits. Further, we also implement a low-cost weak supervision scheme that brings the accuracy of the proposed scheme closer to traditional SGD training. Across a benchmark suite of 8 DNNs, we achieve up to 1.5× reduction in training time, as measured on a modern GPU platform.
6 APPENDIX
6.1 EXPERIMENTAL SETUP
This subsection describes the experimental setup used for realizing the baseline and proposed LoCal+SGD training schemes, on the benchmarks specified in Section 3 of the main paper. We conduct our experiments on the complete training and test datasets of each benchmark, using the PyTorch (Paszke et al., 2019) framework.
Baseline: We consider end-to-end SGD training as the baseline in our experiments. The hyperparameters used in SGD training of each of the benchmarks are described below.
ImageNet: For the experiments in Section 3.1, we utilize a batch size of 64 per GPU for all benchmarks. For the ResNet50 and ResNet34 benchmarks, the initial learning rate is set to 0.025. The learning rate is decayed by 0.1 every 30 epochs, for a total training duration of 90 epochs, and the weight decay is 4e−5. The MobileNetV2 benchmark utilizes an initial learning rate of 0.0125. We use a cosine learning rate decay schedule, as in (Li et al., 2019), for 150 epochs. The weight decay is set to 4e−5. All ImageNet benchmarks use an input size of 224×224×3.
For the experiments in Section 3.2, the total batch-size at epoch 1 is 256 (64*4), with the initial learning rate set to 0.1 for the ResNet benchmarks and 0.05 for the MobileNetV2 benchmark. All other parameters remain the same.
Cifar10 and Cifar100: All Cifar10 and Cifar100 experiments utilize a batch size of 64. The Cifar10 benchmarks are trained with an initial learning rate of 0.05 that is decayed by 0.1 every 10 epochs, across 90 epochs. The initial learning rate of the Cifar100 benchmarks is 0.025 and is decayed by 0.5 every 20 epochs, for 150 epochs in total. The weight decay is set to 5e−4. All Cifar benchmarks utilize an input size of 32×32×3.
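For reference, the baseline hyper-parameters above can be collected into a single configuration table; the dictionary below is a plain restatement of the text (the structure and key names are ours).

```python
# Baseline SGD hyper-parameters, transcribed from the text above.
BASELINE_HPARAMS = {
    "imagenet/resnet34+50": dict(lr=0.025,  decay=0.1, decay_every=30,
                                 epochs=90,  weight_decay=4e-5, batch=64),
    "imagenet/mobilenetv2": dict(lr=0.0125, schedule="cosine",
                                 epochs=150, weight_decay=4e-5, batch=64),
    "cifar10":              dict(lr=0.05,   decay=0.1, decay_every=10,
                                 epochs=90,  weight_decay=5e-4, batch=64),
    "cifar100":             dict(lr=0.025,  decay=0.5, decay_every=20,
                                 epochs=150, weight_decay=5e-4, batch=64),
}
```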
LoCal+SGD: In the proposed LoCal+SGD training scheme, the layers updated with SGD are trained with the same hyper-parameters used in the baseline implementation. Further, LoCal+SGD training is conducted using the same number of epochs as baseline SGD training. When a layer is updated locally, the initial learning rate is 0.01 and is decayed by a factor of 2 and 10 every 30 epochs, for the Cifar and the ImageNet benchmarks respectively. In all experiments, the α parameter is set to 0.95. We measure the accuracy and runtime of the proposed scheme for the same number of training epochs as the baseline implementations.
6.2 HYPER-PARAMETER TUNING
To realize LoCal+SGD, we introduce three hyper-parameters: α, tshift and Lmax. tshift controls the number of layers that switch from SGD-based updates to localized updates every epoch, Lmax is the maximum fraction of layers that can be updated with localized learning rules, and α determines the position of the
transition layer every epoch by analyzing the gradient information at the boundary between the localized and SGD updates.
To obtain optimized values for these hyper-parameters, we first perform simple grid search using a single network for a particular dataset (for example, we choose the ResNet50 network for ImageNet). We transfer the same hyper-parameter values to other networks for the same dataset. We justify our use of common hyper-parameter values by the following experiment. In Table 4 below, we depict the results on other ImageNet benchmarks (ResNet34 and MobileNetV2) when hyper-parameter tuning is performed for each benchmark individually. As can be seen, the accuracy and runtime benefits are only marginally better than those obtained using a common set of hyper-parameters obtained by tuning on the ResNet50 benchmark. We thus utilize common values for a dataset, effectively rendering them constants. The time taken to obtain these constants is thus a one-time cost, and does not impact the speedups obtained by LoCal+SGD.
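The tuning itself amounts to a small grid search on one reference network per dataset; a sketch is shown below, where the candidate values and the evaluate() helper are hypothetical.

```python
import itertools

def evaluate(alpha, t_shift, l_max):
    """Placeholder: train the reference network (e.g., ResNet50 for
    ImageNet) with this configuration and return validation accuracy."""
    raise NotImplementedError

# Hypothetical candidate values for (alpha, t_shift, L_max).
grid = itertools.product([0.90, 0.95, 0.99],   # alpha
                         [2, 3, 4],            # t_shift (layers per shift)
                         [0.60, 0.75, 0.90])   # L_max (fraction of layers)
best_cfg = max(grid, key=lambda cfg: evaluate(*cfg))
```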
6.3 IMPACT OF WEAK SUPERVISION
In Table 5, we highlight the impact of the weak supervision technique on final classification accuracy. As can be seen, across all our benchmarks, the weak supervision technique improves accuracy by 0.06%-0.17%, bringing the final accuracy of LoCal+SGD closer to baseline SGD.
6.4 ADDITIONAL COMPARATIVE ANALYSIS
In addition to the experiments performed in Section 3 to compare the performance of LoCal+SGD against existing techniques such as pruning during training (Lym et al., 2019) and training with stochastic depth (Huang et al., 2016), we conduct additional experiments to further solidify the superiority of our approach. We elucidate upon these comparisons as follows.
6.4.1 COMPARING LoCal+SGD AGAINST SGD AT ISO-ACCURACY

We compare the proposed LoCal+SGD training strategy against an SGD baseline trained for fewer epochs, i.e., the number of epochs required to reach the highest accuracy obtained by LoCal+SGD across the total training periods listed in Section 6.1. For the ImageNet benchmarks, the runtime improvements are listed in Table 6 below. Clearly, LoCal+SGD continues to achieve significant speed-ups (around 1.25×) compared to the SGD baseline, even for complex benchmarks such as ResNet50 and MobileNetV2.
6.4.2 COMPARING LoCal+SGD AGAINST FREEZING LAYERS DURING TRAINING
In Section 3, we compare LoCal+SGD against a technique, freezing layers during training, wherein instead of updating the layers using localized learning, the weights are held fixed. In this section,
we perform a more thorough comparison of LoCal+SGD against freezing layers during training. Specifically, we perform this comparison at iso-runtime, and analyze the resulting accuracy of either approach. To elaborate, we first identify the LoCal+SGD configuration that can reach accuracy within 0.05%, 0.1%, 0.25%, 0.5% and 1% of the baseline SGD accuracy. Then, for the same runtimes taken by each LoCal+SGD configuration, we identify the configuration that provides the best accuracy for the freezing-layers approach. Our results for the Cifar10 ResNet18 benchmark can be found in Table 7. LoCal+SGD outperforms freezing layers during training on 3 out of the 5 configurations studied, i.e., it is the superior technique when the loss compared to SGD is allowed to exceed 0.1%.
6.5 ANALYSIS OF STATIC SCHEDULES FOR LEARNING MODE SELECTION
The current LoCal+SGD framework is realized with the help of an automatic learning mode selection algorithm, which determines the position of the transition layer every epoch. Instead of a dynamic, data-dependent algorithm, we investigate the benefits of using a static schedule - that is, the position of the transition layer is determined by a pre-defined scheduling function. To this end, we implement a simple static schedule that favors aggressive application of the localized learning rule in the initial layers and gradually decreases the number of localized-learning epochs in the deeper layers. As shown in Equation 3, we opt for a quadratic scheduling function, as we empirically observe that it performs better than the linear functions we studied. Here, N determines the position of the transition layer every epoch, Emax is the maximum number of training epochs, and c1 and c2 are constants obtained using grid search.
N = ⌊max(0, c1 − c2 · (E − Emax)²)⌋    (3)
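A direct transcription of Equation 3 as a sketch (the function and argument names are ours):

```python
import math

def static_transition_layer(E, E_max, c1, c2):
    """Equation 3: pre-defined transition-layer position at epoch E.

    c1 caps how deep the boundary eventually goes, and c2 controls how
    quickly it gets there; both are constants found by grid search.
    """
    return math.floor(max(0.0, c1 - c2 * (E - E_max) ** 2))
```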
We report the results using this static schedule in Table 8 for the ImageNet-ResNet50 and MobileNetV2 benchmarks. Compared to the results reported in Table 1, we find that the static schedule achieves slightly higher runtime benefits, for marginally lower accuracies. However, static schedules suffer from some drawbacks – several static scheduling functions are feasible, e.g. exponential, quadratic, etc., and identifying the best scheduling function for each network requires extensive empirical analysis. The learning mode selection algorithm utilized in the paper helps alleviate this by automatically identifying the position of the transition layer every epoch, leveraging the gradient information at the boundary between localized updates and SGD. | 1. What is the focus and contribution of the paper regarding run-time improvements?
2. What are the strengths of the proposed approach, particularly its simplicity and convergence results?
3. What are the weaknesses of the paper, such as hyperparameter complexity and lack of transferability comments?
4. How could the authors improve their argument to practitioners by considering time-to-train and total FLOPS needed to hit the target accuracy?
5. Are there any concerns regarding the combination of techniques introduced without ablation studies to measure their effectiveness?

Review
This paper proposes a combination of SGD with selective application of a non-backprop learning rule (Hebbian). The two learning rules are not applied together on the same layer; rather, a boundary is determined where layers before it use the Hebbian approach and the ones after use SGD. A selection algorithm dynamically adjusts the boundary over training. For accuracy reasons, the authors include weak supervision by using the overall classification loss to control the sign of the update.
From a computational efficiency perspective, the contributions reduce the need for backprop calculations and also lead to a smaller memory footprint, since the activation values need not be stored. On ImageNet benchmark models, they show <0.5% Top-1 drop in exchange for ~1.3x runtime speedup compared to vanilla SGD.
Strengths
- Focus on run-time improvements brings practical significance to the proposed method
- The algorithm is relatively simple to implement
- Convergence results show only a small degradation from SOTA
- The tuning results for alpha (Figure 5) are useful for practitioners that need to balance accuracy and compute
Weaknesses
- The 'meta' boundary selection and weak supervision approaches add additional hyperparameter complexity to the tuning process. These are also empirically but not theoretically motivated, and it is unclear if they generalize to other domains. I understand that non-classification models are out of scope for this paper, but this paper's impact would be improved by some comment on transferability. For example, U-net models have long-range skip connections that span the model.
- While the focus on runtime is welcome, what is relevant to the practitioner is time-to-train to a particular accuracy target, similar to the metric adopted by MLPerf. Since this method does introduce an accuracy degradation (e.g. for RN-34, 27.04% versus 26.6%; Table 1), the fairer comparison would account for the fewer epochs the baseline SGD needs to hit 26.6%. To make a more convincing argument to practitioners, I would compare either the wall-clock time, or the total FLOPS needed, to hit the target accuracy.
- Several techniques are introduced without ablations to measure their effectiveness and justify the added complexity. This is particularly important as these techniques add additional burden on the practitioner in terms of tuning the new hyperparameters.
The work combines existing learning rules (SGD, Hebbian) with some novelty in how they are employed, and with a weak supervisory signal, to achieve reasonable results. These contributions are not foundational improvements, so the paper's main merit lies in the potential practical impact of this method. The significance to practitioners, however, is greatly reduced by the weaknesses described above.
Impact on runtime efficiency: We first analyze the spatial trends, i.e., if locally updating specific layers in the network results in better runtime efficiency. In a particular epoch, if a convolutional layer L, updated with SGD precedes a convolutional layer K, that is updated locally, calculating the SGD-based error gradients of Layer L, i.e. δL, requires error propagation through the locally updated layer K. From a compute efficiency perspective, the benefits of using localized-updates in layer K completely vanish. Thus, it makes sense to partition the network into two regions - a prefix (set of initial layers) that are updated using localized learning, followed by layers that are updated with SGD. SGD-based BP is stopped at the junction of the two regions. Naturally, the compute benefits increase when the number of locally updated layers are higher and thus the boundary i.e., the Localized→SGD transition layer is moved deeper into the network. The impact of different temporal patterns on runtime efficiency is quite straightforward, with higher number of locally updated epochs leading to higher benefits. Further, as the compute complexity of localized updates is constant across different epochs, these benefits are agnostic of which particular epoch involves localized learning.
Impact on accuracy: To analyze the impact on accuracy, we first examine the nature of features learnt by different layers trained by SGD. It is commonly accepted that the initial layers of a network (Agrawal et al., 2014) perform feature extraction, while later layers aid in the classification process. As localized learning demonstrates better performance for feature extraction, applying it more aggressively, i.e for higher number of epochs, in the initial layers has a much smaller impact accuracy. However, for later layers in the network, the number of localized learning epochs should be progressively reduced to preserve accuracy.
Overall, based on the impact of localized learning on both runtime and accuracy, we find that a good learning mode selection algorithm should favor application of localized learning to a contiguous group of initial layers, while ensuring fewer or no localized learning epochs in later layers. We further
impose an additional constraint on top of this spatio-temporal pattern. Specifically, we allow each layer to transition from one learning mode to another at most once during the entire training process. We empirically observe that utilizing SGD as the initial learning mode allows the network to achieve a higher accuracy than utilizing localized learning as the initial mode. SGD essentially provides a better initialization point for all layers, and the subsequent use of localized updates enables the training to converge with good accuracy.
Algorithm 1 Learning Mode Selection Algorithm Input: TE (Index of the transition layer at epoch
E), ek (epochs since last transition), ||4WE || (L2 norm of the weight update of the transition layer at epoch E), K (minimum interval between transitions), tshift (number of layers to shift boundary) Output: TE+1 (Index of the transition layer at epoch E+1)
1: WAvg = 1K ∑e=E−1
e=E−K || 4We|| 2: if || 4WE || <= α ·WAvg and ek>=K 3: TE+1 = TE + tshift 4: ek = 0 5: else 6: TE+1 = TE 7: ek = ek + 1 In accordance with the above considerations, we propose a learning mode selection algorithm, described in Algorithm 1, that identifies the position of the boundary or the Localized→SGD transition layer every epoch. To that end, the algorithm analyzes the L2 norm of the SGD weight updates made to the Localized→SGD transition layer across epochs and determines whether the boundary can be shifted deeper into the network for the next epoch. In order to ensure stability in the training process, the algorithm moves the boundary at most once in every K epochs. It calculates the running average of the norm of the updates, Wavg, over the last K epochs (line 1). The boundary is shifted to the right only if the weight update in epoch E is within a fraction α of Wavg, and K epochs have transpired since the last transition (line 2). The rationale for this criterion is that sustained high magnitudes of weight updates in the transition layer indicate that they are potentially critical to accuracy, in which case the transition layer must continue being updated with SGD. If the criterion is not satisfied, the boundary remains stationary (line 5).
The value of α is set by analyzing the trends in the weight update magnitudes across the training process for different networks. The hyper-parameter tshift is set to the size of a recurring block, such as the residual blocks in ResNets and MobileNetV2. The hyper-parameter K is selected in a manner that ensures that localized updates are never applied beyond some fraction of the initial network layers. We denote this fraction as Lmax, and is set to 0.75 in all our experiments. Equation 2 is used to compute K for a network of L layers and a total training period of Emax epochs.
K = Emax
Lmax ∗ Ltshift (2)
In Figure 3, we plot the progression of the transition layer across the ResNet-34 and -50 benchmarks trained on the ImageNet dataset using LoCal+SGD. Interestingly, the weight update norm metric automatically modulates the rate at which the boundary progresses, as the boundary traverses the deeper layers at a slower rate.
2.3 WEAK SUPERVISION
To further bridge the accuracy gap between our approach and end-to-end SGD training, we introduce weak supervision in the locally updated layers. Unlike the SGD-updated layers, the locally updated layers in our approach cannot take advantage of the information provided by supervision, i.e., the classification error evaluated at the output. We utilize this supervised information through a low-cost weak supervision scheme that consists of a single signal sent to all layers updated locally in a particular epoch, and is derived from the classification loss observed over past few epochs. The weak supervision scheme is described in Algorithm 2.
The key principle behind the weak supervision scheme is to control the learning rates of the locally updated layers based on the rate at which the overall classification loss changes. For example, if the overall classification loss has increased across consecutive epochs, we reverse the direction of the
updates (line 3) in the next epoch. In contrast, the update direction is maintained if the overall loss is decreasing (line 5). We find that this weak supervision provides better accuracy results than other learning rate modulation techniques for the locally updated layers such as Adam or momentum-based updates.
Algorithm 2 Weak Supervision Scheme Input: Li (Overall classification loss at epoch
i), lrL (original learning rate of layer L) Output: WL (Weight update of layer L)
1: 4WL = conv(al−1, zl) 2: if Li−1 < Li 3: WL = WL - lrL · 4WL||4WL|| 4: else 5: WL = WL + lrL · 4WL||4WL|| We would like to highlight that traditional SGD provides fine-grained supervision and involves evaluating the error gradients for every neuron in the network. In contrast, the proposed weak supervision scheme provides coarse-grained supervision by forcing all weights to re-use the same loss information. Overall, our weak supervision scheme is not developed with the intent to compete with SGD updates, but is rather a simple, approximate and low-cost technique that brings the final accuracy of LoCal+SGD at par with end-to-end SGD training performance.
3 EXPERIMENTAL RESULTS
In this section, we present the results of our experiments highlighting the compute benefits achieved by LoCal+SGD. We evaluate the benefits across a suite of 8 image-recognition DNNs across 3 datasets. We consider the ResNet18 (He et al., 2015) and VGG13 (Simonyan & Zisserman, 2015) networks for the Cifar10 (Krizhevsky et al., a) and Cifar100 (Krizhevsky et al., b) datasets; and the ResNet34, ResNet50 (He et al., 2015) and MobileNetV2 (Sandler et al., 2018) networks for the ImageNet dataset (Deng et al., 2009). All experiments are conducted on Nvidia GTX 1080Ti GPUs with the batch size set to 64 per GPU, unless otherwise mentioned. Further experimental methodology details for the baseline and proposed approach are provided in the Appendix.
3.1 SINGLE GPU EXECUTION TIME BENEFITS
ImageNet: Table 1 presents the performance of the baseline (end-to-end SGD training) and the proposed LoCal+SGD algorithm on the ImageNet benchmarks in terms of the Top-1 classification error and runtime observed on a single GPU. For all benchmarks listed here, LoCal+SGD applies localized updates for nearly 50-60% of the layers. As can be seen, LoCal+SGD achieves upto∼1.4× reduction in runtime compared to to the baseline, while sacrificing <0.5% loss in Top-1 accuracy.
Table 1 also compares the performance of LoCal+SGD against existing research efforts designed to improve training efficiency. We perform this analysis against two efforts, namely (i) Training with stochastic depth (Huang et al., 2016) and (ii) Structured Pruning during Training (Lym et al., 2019). Training with stochastic depth, as the name suggests, stochastically bypasses residual blocks by propagating input activations/error gradients via identity or downsampling transformations, resulting in improved training time. However, the approach is targeted towards extremely deep networks and
as seen in Table 1, it incurs a noticeable accuracy loss on networks such as ResNet34, ResNet50 and MobileNetV2. Compared to training with stochastic depth, our proposal clearly achieves better accuracy as well as training runtime benefits. The key principle behind the pruning during training approach is to reduce the size of the weight and activation tensors in a structured manner during training, thereby providing speed-ups on GPU/TPU platforms. However, on complex benchmarks such as ResNet50, such techniques achieve speed-ups at the cost of significant drop in accuracy (∼ 1.5%). To further demonstrate the utility of localized updates in our approach, we consider a third technique, wherein layers selected to be updated locally for a given epoch are instead frozen, i.e., the parameters are held fixed during that epoch. While this achieves better runtime savings, it incurs considerably higher loss (∼1%) in accuracy, further underscoring the benefits of LoCal+SGD. CIFAR-10 and CIFAR-100: Table 2 presents the accuracy and corresponding compute benefits of the baseline and the proposed technique, as well as training with stochastic depth and layer freezing, for the CIFAR-10 and CIFAR-100 datasets. Stochastic depth is applicable only to residual blocks and is hence not considered for the VGG-13 network. Across benchmarks, we observe upto a 1.51× improvement in training runtime. Compared to the ImageNet benchmarks, LoCal+SGD applies localized updates more aggressively in the CIFAR-10 and CIFAR-100 benchmarks i.e., for more layers are updated locally for a higher number of epochs. This leads to the superior compute benefits of the proposed scheme on these benchmarks.
3.2 EXECUTION TIME BENEFITS FOR MULTI-GPU TRAINING
We analyze the memory footprint of the ResNet50 network when trained with LoCal+SGD on the ImageNet dataset. Training first commences with all layers updated with SGD, resulting in a high memory footprint. Due to the 10 GB capacity of the chosen GPU, the mini-batch size is set to 64 per GPU. As the Localized→SGD transition layer progresses across the network, the memory footprint required also gradually reduces across epochs. We take advantage of this reduction in memory footprint in the context of distributed training using 4 GPUs with data parallelism. Specifically, we extract additional runtime benefits by increasing the batch size on each GPU,
which reduces the frequency of gradient aggregation between devices and alleviates the communication overhead. At epoch 33, the memory footprint per GPU reduces to less than 5 GB, allowing training with an increased mini-batch size of 128 per GPU from epoch 33 onwards. The doubling of the batch-size provides an additional 6% runtime improvement, when measured across the entire training period. We note that other training techniques such as training with stochastic depth cannot exploit this feature, due to minimal reduction in memory footprint.
3.3 ABLATION ANALYSIS
As mentioned in Section 2, the hyper-parameters α, tshift and Lmax control the progression of the boundary across the network. Different values of either parameter can result in dif-
ferent learning mode configurations during training, resulting in different points in the computational efficiency vs. accuracy trade-off space. To understand the trade-off space between accuracy and runtime benefits, we now individually study the impact of each parameter.
Sp ee
d -U
p
Loss in Acc(%)→
Speed-Up
Sp ee
d -U
p →
Loss in Acc(%)→
(a) (b)
1
1.1
1.2
1.3
1.4
1.5
1.6
0 0.4 0.8 1.2 1.6 1
1.1
1.2
1.3
1.4
0 0.4 0.8 1.2 1.6
Figure 5: Compute efficiency vs. accuracy trade-off on the ImageNet dataset for a) ResNet50 and b) MobileNetV2
Impact of α : Figure 5 depicts the best compute benefits achieved for different α, for accuracy losses ranging from 0.1%-1.5% for the ResNet50 and MobileNetV2 benchmarks on ImageNet. On the ResNet50 benchmark, even while limiting the loss in accuracy 0.1%, LoCal+SGD achieves 1.1× speedup over traditional SGD. The speedups
increase to 1.38×-1.47× when around 1.5% loss in accuracy is tolerable.
To p
-1 E
rr o
r %
→
To p
-1 E
rr o
r %
→
Lmax % →Tshift % →
Accuracy
(a) (b)
1
1.1
1.2
1.3
1.4
20
25
30
35
1 4 7 10 13
R u
n ti
m e
Sa vi
n gs
→
Sp ee
d -U
p →
Speed-Up
1
1.2
1.4
1.6
22
24
26
28
10 30 50 70 90
curacy is largely stable in the regime of tshift between 5-12%, and begins to experience small degradations again when tshift exceeds 12%. These trends can be explained by analyzing the rate at which the transition layer progresses, and the number of layers transitioning to localized updates in an epoch for different tshift values. Smaller values of tshift (<3%) give rise to low values of k (∼1-2 epochs), the minimum number of epochs that must elapse before the transition layer can shift again. This results in fast progression of the transition layer across the network, leading to rapid changes in the learning mode at the boundary, thereby negatively impacting accuracy. In contrast, while larger tshift values (>12%) encourage slow progression of the boundary, a larger number of layers transition from SGD to localized updates in a single epoch, thereby impacting performance. We note here that in both cases, while α and Lmax can be tuned to control the progression and mitigate the loss in accuracy, the runtime savings is vastly reduced (<10%). Furthermore, for fixed values of Lmax and α, tshift is largely insensitive to runtime benefits, as the average number of layers updated with localized updates remains similar. Hence, for best accuracy and runtime benefits we set tshift in the range of 5-10% for all networks.
Impact of Lmax: Figure 6(b) depicts the impact of Lmax on accuracy for the ResNet50 network. For each Lmax, we identify the α and tshift that provide the best runtime benefits with minimal loss in accuracy (less than 0.5%). As with tshift, we denote Lmax as a percentage of the total network depth. As seen in the figure, the degradation in accuracy grows slowly while Lmax remains within the initial layers: it is merely 0.1% at around Lmax = 30%, and increases to 0.4-0.5% for Lmax = 60-70%. However, the accuracy degradation sharply increases beyond 2% once Lmax exceeds 90% of the network depth. Further, runtime benefits generally increase with higher values of Lmax, for fixed tshift and α. Hence, to achieve a good accuracy versus runtime trade-off, we usually set Lmax to 75% for all networks.
4 RELATED WORK
This section discusses research efforts related to the proposed LoCal+SGD training technique. These efforts can be broadly categorized into two classes. The first class of efforts focuses on compute-efficient DNN training; all efforts in this class use gradient-descent algorithms to train the DNN model. These techniques are largely complementary to LoCal+SGD, as they can potentially be applied to the parts of the DNN model updated with SGD, and in Section 3 we demonstrated that LoCal+SGD achieves a superior accuracy versus computational efficiency trade-off than some of these efforts. The second class of efforts involves neuro-scientifically faithful learning rules, such as feedback alignment (Nøkland, 2016). Our work is orthogonal to such efforts, as we selectively combine localized learning rules with SGD for better computational efficiency. We elaborate on both directions below.
Hyper-parameter tuning: Many notable algorithmic efforts are directed towards achieving training efficiency by controlling the hyper-parameters involved in gradient descent, notably the learning rate. (You et al., 2017a; Akiba et al., 2017; Goyal et al., 2017; You et al., 2017b) propose learning rate tuning algorithms that complete ImageNet training in less than an hour, with no loss in accuracy, when distributed across hundreds of CPU/GPU cores.
Model size reduction during training: Model size reduction via pruning and quantization is a popular technique to reduce compute costs during inference. In many of these efforts, a dense or full-precision model is re-trained or fine-tuned to obtain a pruned or quantized model. Recently, several efforts have also investigated dynamically pruning (Lym et al., 2019) or quantizing (Sun et al., 2019) a model during training itself; the reduction in model size yields training speed-ups. Taking a slightly different approach, (Huang et al., 2016) propose stochastically dropping residual blocks in extremely deep networks such as ResNet-1202, not only for training runtime benefits but also for better accuracy due to improved gradient strength.
Instance importance based training: Recent research efforts have discovered that not all training samples are required for improving loss minimization during SGD training (Jiang et al., 2019; Zhang et al., 2019). That is, a sizable fraction of the samples can be skipped during several epochs, depending on their impact on the classification loss evaluated during FP. This reduces the number of mini-batches processed, providing considerable runtime benefits.
Neuro-scientific learning rules: The back-propagation algorithm utilized in DNN training is not biologically plausible and does not explain how learning actually happens in the brain. To this end, there have been several efforts to develop biologically faithful learning algorithms, which demonstrate considerable success on complex benchmarks including Cifar10 and ImageNet. For example, unlike conventional DNN training, feedback alignment algorithms (Nøkland, 2016) tackle the weight transport problem (Liao et al., 2015) by allowing for asymmetry in the weight values during forward and back propagation. Likewise, Target-Propagation (Lee et al., 2015) encourages neural activity to reach desired target activations evaluated during forward propagation itself, instead of utilizing loss gradients.
5 CONCLUSION
In this paper, we introduce a new approach to improve the training efficiency of state-of-the-art DNNs. Specifically, we take advantage of the computationally efficient nature of localized learning rules and selectively update some layers with these rules instead of SGD. We design a learning mode selection algorithm that determines the update method for the convolutional layers of the network in every epoch, maintaining accuracy while extracting maximum runtime benefits. Further, we implement a low-cost weak supervision scheme that brings the accuracy of the proposed scheme closer to traditional SGD training. Across a benchmark suite of 8 DNNs, we achieve up to 1.5× reduction in training time with ∼0.5% loss in Top-1 accuracy, as measured on a modern GPU platform.
6 APPENDIX
6.1 EXPERIMENTAL SETUP
This subsection describes the experimental setup used for realizing the baseline and proposed LoCal+SGD training schemes, on the benchmarks specified in Section 3 of the main paper. We conduct our experiments on the complete training and test datasets of each benchmark, using the PyTorch (Paszke et al., 2019) framework.
Baseline: We consider end-to-end SGD training as the baseline in our experiments. The hyperparameters used in SGD training of each of the benchmarks are described below.
ImageNet: For the experiments in Section 3.1, we utilize a batch-size of 64 per GPU for all benchmarks. For the ResNet50 and ResNet34 benchmarks, the initial learning rate is set to 0.025; it is decreased by 0.1 every 30 epochs over a total training duration of 90 epochs, and the weight decay is 4e-5. The MobileNetV2 benchmark utilizes an initial learning rate of 0.0125 with a cosine learning rate decay schedule, as in (Li et al., 2019), for 150 epochs; the weight decay is again 4e-5. All ImageNet benchmarks use an input size of 224×224×3.
For the experiments in Section 3.2, the total batch-size at epoch 1 is 256 (64*4), with the initial learning rate set to 0.1 for the ResNet benchmarks and 0.05 for the MobileNetV2 benchmark. All other parameters remain the same.
Cifar10 and Cifar100: All Cifar10 and Cifar100 experiments utilize a batch-size of 64. The Cifar10 benchmarks are trained with an initial learning rate of 0.05, decayed by 0.1 every 10 epochs, for 90 epochs in total. The initial learning rate of the Cifar100 benchmarks is 0.025, decayed by 0.5 every 20 epochs, for 150 epochs in total. The weight decay is set to 5e-4. All Cifar benchmarks utilize an input size of 32×32×3.
LoCal+SGD: In the proposed LoCal+SGD training scheme, the layers updated with SGD are trained with the same hyper-parameters used in the baseline implementation. Further, LoCal+SGD training is conducted using the same number of epochs as baseline SGD training. When a layer is updated locally, the initial learning rate is 0.01 and is decayed by a factor of 2 and 10 every 30 epochs, for the Cifar and the ImageNet benchmarks respectively. In all experiments, the α parameter is set to 0.95. We measure the accuracy and runtime of the proposed scheme for the same number of training epochs as the baseline implementations.
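For completeness, the following is a minimal sketch of how the localized update of Equations 3 and 4 in the main paper (ΔW = conv(a_{l-1}, z_l), followed by normalization) can be applied to a conv layer during the forward pass. The use of torch.nn.grad.conv2d_weight to form the input-output correlation, and the epsilon in the normalization, are implementation choices made for this illustration.

```python
import torch
import torch.nn.functional as F
from torch.nn.grad import conv2d_weight

@torch.no_grad()
def localized_forward(conv, a_prev, lr):
    """Forward pass through one conv layer with an in-place Hebbian update.

    a_prev: input activations a_{l-1}, shape (N, C_in, H, W).
    Returns the pre-activation output z_l so the forward pass can continue.
    """
    z = F.conv2d(a_prev, conv.weight, conv.bias,
                 stride=conv.stride, padding=conv.padding)
    # Correlate inputs with pre-activation outputs (Eq. 3). This has the same
    # GEMM shape as the SGD weight gradient, but substitutes z_l for the
    # back-propagated error, so no backward pass or stored activations are needed.
    dw = conv2d_weight(a_prev, conv.weight.shape, z,
                       stride=conv.stride, padding=conv.padding)
    conv.weight += lr * dw / (dw.norm() + 1e-8)   # normalized update (Eq. 4)
    return z
```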
6.2 HYPER-PARAMETER TUNING
To realize LoCal+SGD, we introduce three hyper-parameters: α, tshift and Lmax. tshift controls the number of layers that switch from SGD-based to localized updates whenever the boundary shifts, Lmax is the maximum number of layers that can be updated with localized learning rules, and α is the threshold on the weight-update magnitude of the transition layer that determines, every epoch, whether the boundary between localized and SGD updates shifts (Algorithm 1).
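The role of these hyper-parameters can be seen directly in the boundary-update criterion of Algorithm 1 in the main paper, restated below as a short sketch; keeping the update-norm history in a plain list is a representational choice for this illustration.

```python
def next_transition_layer(t_cur, norms, e_k, alpha, k_min, t_shift):
    """One step of Algorithm 1: pick the transition layer for the next epoch.

    norms: L2 norms ||dW_e|| of the transition layer's SGD updates, one per
    epoch, with norms[-1] belonging to the current epoch E (assumes at least
    K+1 recorded norms).
    """
    w_avg = sum(norms[-k_min - 1:-1]) / k_min   # running average over the last K epochs
    if norms[-1] <= alpha * w_avg and e_k >= k_min:
        return t_cur + t_shift, 0               # shift the boundary deeper, reset counter
    return t_cur, e_k + 1                       # hold the boundary this epoch
```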
To obtain optimized values for these hyper-parameters, we first perform a simple grid search using a single network for a particular dataset (for example, we choose the ResNet50 network for ImageNet). We then transfer the same hyper-parameter values to other networks for the same dataset. We justify our use of common hyper-parameter values with the following experiment. In Table 4 below, we depict the results on other ImageNet benchmarks (ResNet34 and MobileNetV2) when hyper-parameter tuning is performed for each benchmark individually. As can be seen, the accuracy and runtime benefits are only marginally better than those obtained using a common set of hyper-parameters tuned on the ResNet50 benchmark. We thus utilize common values for a dataset, effectively rendering them constants. The time taken to obtain these constants is a one-time cost, and does not impact the speedups obtained by LoCal+SGD.
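A sketch of this one-time sweep is shown below; the candidate grids are illustrative, not the exact values we searched over.

```python
import itertools

def grid_search(train_and_eval):
    """train_and_eval(alpha, t_shift, l_max) -> validation accuracy (user-supplied)."""
    alphas   = (0.90, 0.95, 0.99)
    t_shifts = (0.05, 0.075, 0.10)   # as a fraction of the network depth
    l_maxes  = (0.50, 0.75)
    return max(itertools.product(alphas, t_shifts, l_maxes),
               key=lambda cfg: train_and_eval(*cfg))
```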
6.3 IMPACT OF WEAK SUPERVISION
In Table 5, we highlight the impact of the weak supervision technique on final classification accuracy. As can be seen, across all our benchmarks, the weak supervision technique improves accuracy by 0.06%-0.17%, bringing the final accuracy of LoCal+SGD closer to baseline SGD.
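As a reminder of the mechanism, the weak supervision rule (Algorithm 2 of the main paper) simply flips the direction of the normalized localized update whenever the overall classification loss rose between consecutive epochs; a minimal sketch is given below, with loss tracking assumed to happen elsewhere.

```python
import torch

@torch.no_grad()
def weakly_supervised_update(weight, dw, lr, loss_prev, loss_cur):
    """Apply a localized update dw, with its sign set by the loss trend."""
    step = lr * dw / (dw.norm() + 1e-8)
    if loss_prev < loss_cur:   # loss increased: reverse the update direction
        weight -= step
    else:                      # loss decreased: keep the update direction
        weight += step
```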
6.4 ADDITIONAL COMPARATIVE ANALYSIS
In addition to the experiments in Section 3 comparing LoCal+SGD against existing techniques such as pruning during training (Lym et al., 2019) and training with stochastic depth (Huang et al., 2016), we conduct further experiments to strengthen the case for our approach. We describe these comparisons below.
6.4.1 COMPARING LoCal+SGD AGAINST SGD AT ISO-ACCURACY
We compare the proposed LoCal+SGD training strategy against an SGD baseline trained for fewer epochs, i.e., the number of epochs required to reach the highest accuracy obtained by LoCal+SGD within the total training periods listed in Section 6.1. For the ImageNet benchmarks, the runtime improvements are listed in Table 6 below. Clearly, LoCal+SGD continues to achieve significant speed-ups (around 1.25×) compared to this SGD baseline, even on complex benchmarks such as ResNet50 and MobileNetV2.
6.4.2 COMPARING LoCal+SGD AGAINST FREEZING LAYERS DURING TRAINING
In Section 3, we compared LoCal+SGD against freezing layers during training, a technique wherein the layers selected for localized learning instead have their weights held fixed. In this section, we perform a more thorough comparison of LoCal+SGD against freezing layers during training. Specifically, we perform this comparison at iso-runtime and analyze the resulting accuracy of either approach. To elaborate, we first identify the LoCal+SGD configurations that reach the best accuracy within 0.05%, 0.1%, 0.25%, 0.5% and 1% of the baseline SGD accuracy. Then, for the same runtime taken by each LoCal+SGD configuration, we identify the configuration that provides the best accuracy for the freezing-layers approach. Our results for the Cifar10 ResNet18 benchmark can be found in Table 7. LoCal+SGD performs better than freezing layers during training on 3 out of the 5 configurations studied, i.e., it is the superior technique when the loss relative to SGD is allowed to exceed 0.1%.
6.5 ANALYSIS OF STATIC SCHEDULES FOR LEARNING MODE SELECTION
The current LoCal+SGD framework relies on an automatic learning mode selection algorithm that determines the position of the transition layer every epoch. Instead of a dynamic, data-dependent algorithm, we investigate the benefits of using a static schedule, i.e., determining the position of the transition layer with a pre-defined scheduling function. To this end, we implement a simple static schedule that applies the localized learning rule aggressively in the initial layers and gradually decreases the number of epochs for which localized learning is applied in the deeper layers. As shown in Equation 3, we opt for a quadratic scheduling function, as we empirically observe it performs better than the linear functions we studied. Here, N is the index of the transition layer at epoch E, Emax is the maximum number of training epochs, and c1 and c2 are constants obtained via grid search.
N = ⌊max(0, c1 − c2 · (E − Emax)²)⌋ (3)
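Equation 3 translates directly into code; the sketch below traces how the transition-layer position would evolve over training for hypothetical constants c1 and c2 (the actual constants are obtained by grid search).

```python
import math

def static_transition_layer(epoch, e_max, c1, c2):
    # Equation 3: N = floor(max(0, c1 - c2 * (E - Emax)^2))
    return math.floor(max(0.0, c1 - c2 * (epoch - e_max) ** 2))

E_MAX, C1, C2 = 90, 40.0, 0.005   # hypothetical constants for a ~50-layer network
for epoch in (0, 30, 60, 90):
    # The transition layer starts near the input and sweeps deeper as training
    # progresses, so localized learning is applied longest in the initial layers.
    print(epoch, static_transition_layer(epoch, E_MAX, C1, C2))
```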
We report the results using this static schedule in Table 8 for the ImageNet-ResNet50 and MobileNetV2 benchmarks. Compared to the results reported in Table 1, we find that the static schedule achieves slightly higher runtime benefits at marginally lower accuracies. However, static schedules suffer from some drawbacks: several static scheduling functions are feasible (e.g., exponential, quadratic), and identifying the best scheduling function for each network requires extensive empirical analysis. The learning mode selection algorithm used in the paper alleviates this by automatically identifying the position of the transition layer every epoch, leveraging the gradient information at the boundary between localized updates and SGD.

Review questions:
1. What is the main contribution of the paper on DNN training efficiency?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its explanation and configuration?
3. Do you have any questions about the experimental results and their interpretation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Review:
This work demonstrates that localized learning can improve DNN training efficiency by reducing computation and memory requirements. The effectiveness of this approach is shown by the experimental results, which report a nice trade-off of minimal accuracy loss and good throughput improvement compared to baseline SGD and competing efficient training techniques. While the experimental results are indeed appealing, a major flaw is that the paper does not sufficiently explain why the underlying techniques (learning mode selection and weak supervision) work. Since these techniques are configured with a number of hyper-parameters, I could not gain an intuition of how/why/whether they work in general. For example, tshift is set to the recurring block size of residual nets, but no justification is given for this choice, or for how to set it for non-residual nets. In other words, the hyper-parameter settings for these techniques appear ad-hoc (I suspect they are not), and so it is not clear to me how much exploration is required. What could have helped is a study of the incremental benefits of these techniques in the experiment section. In summary, while the results are good, the writing and presentation could be greatly improved to help readers learn and use the proposal.

Pros:
Tackles an important problem of reducing time and resource requirements of DNN training.
The general approach of computing layers differently over the course of training is quite intuitive.
Presents two techniques that appear to make localized learning practical and effective for DNN training.
Cons:
The proposed techniques are parameterized (e.g., tshift, α, etc.), but how to configure them, and the effort required to do so, is not clear.
The proposed techniques are not sufficiently explained to help build intuition. For example, the weak supervision suggests that reversing the weight update direction can effectively reverse an increase in classification loss (i.e., divergence), but it is not clear why this is the case or whether this observation applies to SGD and other optimizers.
Incremental benefits of the techniques are not provided in the evaluation.
We analyze the memory footprint of the ResNet50 network when trained with LoCal+SGD on the ImageNet dataset. Training first commences with all layers updated with SGD, resulting in a high memory footprint. Due to the 10 GB capacity of the chosen GPU, the mini-batch size is set to 64 per GPU. As the Localized→SGD transition layer progresses across the network, the memory footprint required also gradually reduces across epochs. We take advantage of this reduction in memory footprint in the context of distributed training using 4 GPUs with data parallelism. Specifically, we extract additional runtime benefits by increasing the batch size on each GPU,
which reduces the frequency of gradient aggregation between devices and alleviates the communication overhead. At epoch 33, the memory footprint per GPU reduces to less than 5 GB, allowing training with an increased mini-batch size of 128 per GPU from epoch 33 onwards. The doubling of the batch-size provides an additional 6% runtime improvement, when measured across the entire training period. We note that other training techniques such as training with stochastic depth cannot exploit this feature, due to minimal reduction in memory footprint.
3.3 ABLATION ANALYSIS
As mentioned in Section 2, the hyper-parameters α, tshift and Lmax control the progression of the boundary across the network. Different values of either parameter can result in dif-
ferent learning mode configurations during training, resulting in different points in the computational efficiency vs. accuracy trade-off space. To understand the trade-off space between accuracy and runtime benefits, we now individually study the impact of each parameter.
Sp ee
d -U
p
Loss in Acc(%)→
Speed-Up
Sp ee
d -U
p →
Loss in Acc(%)→
(a) (b)
1
1.1
1.2
1.3
1.4
1.5
1.6
0 0.4 0.8 1.2 1.6 1
1.1
1.2
1.3
1.4
0 0.4 0.8 1.2 1.6
Figure 5: Compute efficiency vs. accuracy trade-off on the ImageNet dataset for a) ResNet50 and b) MobileNetV2
Impact of α : Figure 5 depicts the best compute benefits achieved for different α, for accuracy losses ranging from 0.1%-1.5% for the ResNet50 and MobileNetV2 benchmarks on ImageNet. On the ResNet50 benchmark, even while limiting the loss in accuracy 0.1%, LoCal+SGD achieves 1.1× speedup over traditional SGD. The speedups
increase to 1.38×-1.47× when around 1.5% loss in accuracy is tolerable.
To p
-1 E
rr o
r %
→
To p
-1 E
rr o
r %
→
Lmax % →Tshift % →
Accuracy
(a) (b)
1
1.1
1.2
1.3
1.4
20
25
30
35
1 4 7 10 13
R u
n ti
m e
Sa vi
n gs
→
Sp ee
d -U
p →
Speed-Up
1
1.2
1.4
1.6
22
24
26
28
10 30 50 70 90
curacy is largely stable in the regime of tshift between 5-12%, and begins to experience small degradations again when tshift exceeds 12%. These trends can be explained by analyzing the rate at which the transition layer progresses, and the number of layers transitioning to localized updates in an epoch for different tshift values. Smaller values of tshift (<3%) give rise to low values of k (∼1-2 epochs), the minimum number of epochs that must elapse before the transition layer can shift again. This results in fast progression of the transition layer across the network, leading to rapid changes in the learning mode at the boundary, thereby negatively impacting accuracy. In contrast, while larger tshift values (>12%) encourage slow progression of the boundary, a larger number of layers transition from SGD to localized updates in a single epoch, thereby impacting performance. We note here that in both cases, while α and Lmax can be tuned to control the progression and mitigate the loss in accuracy, the runtime savings is vastly reduced (<10%). Furthermore, for fixed values of Lmax and α, tshift is largely insensitive to runtime benefits, as the average number of layers updated with localized updates remains similar. Hence, for best accuracy and runtime benefits we set tshift in the range of 5-10% for all networks.
Impact of Lmax : Figure 6(b) depicts the impact of Lmax on accuracy for the ResNet50 network. For each Lmax, we identify the α and tshift that provide the best runtime benefits with minimal loss in accuracy (less than 0.5%). As with tshift, we denote Lmax as a percentage of the total network
depth. As seen in the figure, the degradation in accuracy increases slowly for Lmax in the initial layers - it is merely 0.1% at around Lmax = 30%, and increases to 0.4-0.5% for Lmax = 60-70%. However, the accuracy degradation sharply increases beyond 2% once Lmax exceeds 90% of the network depth. Further, runtime benefits generally increase with higher values of Lmax, for fixed tshift and α. Hence, for achieving a good accuracy versus runtime trade-off, we usually set Lmax to 75% for all networks.
4 RELATED WORK
This section discusses related research efforts to the proposed LoCal+ SGD training technique. These efforts can be broadly categorized into two classes. The first class of efforts focus on compute efficient DNN training. All efforts belonging to this class utilize gradient-descent algorithms to train the DNN model. These techniques are largely complementary to LoCal+SGD, as they can potentially be applied to the parts of the DNN model updated with SGD. In Section 3, we demonstrated how LoCal+SGD achieves superior accuracy versus computational efficiency trade-off than some of these efforts. Further, the second class of efforts involve neuro-scientific faithful learning rules, such as feedback alignment based efforts etc (Nøkland, 2016). Our work is orthogonal to such efforts, as we selectively combine localized learning rules with SGD for better computational efficiency.
We elucidate upon the different research efforts in both directions as follows.
Hyper-parameter tuning: Many notable algorithmic efforts are directed towards achieving training efficiency by controlling the hyper-parameters involved in gradient-descent, notably the learning rate. (You et al., 2017a; Akiba et al., 2017; Goyal et al., 2017; You et al., 2017b) propose learning rate tuning algorithms that achieve training in less than an hour with no loss in accuracy, when distributed to over hundreds of CPU/GPU cores.
Model size reduction during training: Model size reduction via pruning and quantization is a popular technique to reduce compute costs during inference. In many of these efforts, a dense or full precision model is re-trained or fine-tuned to obtain a pruned or quantized model. Recently, several efforts have also investigated dynamically pruning (Lym et al., 2019) or quantizing (Sun et al., 2019) a model during training itself. The reduction in model size results in training speed-ups. Taking a slightly different approach (Huang et al., 2016) proposes stochastically dropping residual blocks on extremely deep networks such as ResNet-1202, not only for training runtime benefits but also better accuracies due to improved gradient strength.
Instance importance based training: Recent research efforts have discovered that not all training samples are required for improving loss minimization during SGD training (Jiang et al., 2019; Zhang et al., 2019). That is, a sizable fraction of the samples can be skipped during several epochs, depending on their impact on the classification loss evaluated during FP . This translates to a reduction in mini-batches, providing considerable runtime benefits.
Neuro-scientific learning rules: Back-propagation algorithms utilized in DNN training are not biologically plausible, and do not explain how learning actually happens in the brain. To this end, there have been several efforts that develop biological faithful learning algorithms, and demonstrate considerable success on complex benchmarks including Cifar10 and ImageNet. For example, unlike conventional DNN training, feedback alignmnent algorithms (Nøkland, 2016) tackle the weight transport problem (Liao et al., 2015) by allowing for asymmetry in the weight values during forward and back propagation. Likewise, Target-Propagation (Lee et al., 2015) encourages neural activity to reach desired target activations evaluated during forward propagation itself, instead of utilizing loss gradients.
5 CONCLUSION
In this paper, we introduce a new approach to improve the training efficiency of state-of-the-art DNNs. Specifically, we take advantage of the computationally efficient nature of localized learning rules and selectively update some layers with these rules instead of SGD. We design an intelligent learning mode selection algorithm that determines the update method for the convolutional layers of the network in every epoch while maintaining the accuracy level and extracting maximum benefits. Further, we also implement a low-cost weak supervision scheme that brings the accuracy of the proposed scheme closer to traditional SGD training. Across a benchmark suite of 8 DNNs, we achieve upto 1.5× reduction in training times, as measured on a modern GPU platform.
6 APPENDIX
6.1 EXPERIMENTAL SETUP
This subsection describes the experimental setup used for realizing the baseline and proposed LoCal+SGD training schemes, on the benchmarks specified in Section 3 of the main paper. We conduct our experiments on the complete training and test datasets of each benchmark, using the PyTorch (Paszke et al., 2019) framework.
Baseline: We consider end-to-end SGD training as the baseline in our experiments. The hyperparameters used in SGD training of each of the benchmarks are described below.
ImageNet: For experiments in Section 3.1 we utilize a batch-size of 64 per GPU, for all benchmarks. For the ResNet50 and ResNet34 benchmarks the initial learning rate set to 0.025. The learning rate is decreased by 0.1 every 30 epochs, for a total training duration of 90 epochs, and the weight decay is 4e− 5. The MobileNetV2 benchmark utilizes an initial learning rate of 0.0125. We use a cosine learning rate decay schedule, as in (Li et al., 2019) for 150 epochs. The weight decay is set to 4e− 5. Both benchmarks use an input size of 224*224*3.
For the experiments in Section 3.2, the total batch-size at epoch 1 is 256 (64*4), with the initial learning rate set to 0.1 for the ResNet benchmarks and 0.05 for the MobileNetV2 benchmark. All other parameters remain the same.
Cifar10 and Cifar100: All Cifar10 and Cifar100 experiments utilize a batch-size of 64. The Cifar10 benchmarks are trained with an initial learning rate of 0.05 that is decayed by 0.1 every 10 epochs, across 90 epochs. The initial learning rate of the Cifar100 benchmarks is 0.025 and decayed by 0.5 every 20 epochs, for 150 epochs in total. The weight decay is set to 5e− 4. Both benchmarks utilize an input size of 32*32*3.
LoCal+SGD: In the proposed LoCal+SGD training scheme, the layers updated with SGD are trained with the same hyper-parameters used in the baseline implementation. Further, LoCal+SGD training is conducted using the same number of epochs as baseline SGD training. When a layer is updated locally, the initial learning rate is 0.01 and is decayed by a factor of 2 and 10 every 30 epochs, for the Cifar and the ImageNet benchmarks respectively. In all experiments, the α parameter is set to 0.95. We measure the accuracy and runtime of the proposed scheme for the same number of training epochs as the baseline implementations.
6.2 HYPER-PARAMETER TUNING
To realize LoCal+SGD, we introduce three hyper-parameters: α, tshift and Lmax. tshift controls the number of layers that switch to SGD-based updates every epoch, Lmax is the maximum number of layers that can be updated with localized learning rules, and α determines the position of the
transition layer every epoch by analyzing the gradient information at the boundary between the localized and SGD updates.
To obtain optimized values for these hyper-parameters, we first perform simple grid search using a single network for a particular dataset (for example, we choose the ResNet50 network for ImageNet). We transfer the same hyper-parameter values to other networks for the same dataset. We justify our use of common hyper-parameter values by the following experiment. In Table 4 below, we depict the results on other ImageNet benchmarks (ResNet34 and MobileNetV2) when hyper-parameter tuning is performed for each benchmark individually. As can be seen, the accuracy and runtime benefits are only marginally better than those obtained using a common set of hyper-parameters obtained by tuning on the ResNet50 benchmark. We thus utilize common values for a dataset, effectively rendering them constants. The time taken to obtain these constants is thus a one-time cost, and does not impact the speedups obtained by LoCal+SGD.
6.3 IMPACT OF WEAK SUPERVISION
In Table 5, we highlight the impact of the weak supervision technique on final classification accuracy. As can be seen, across all our benchmarks, the weak supervision technique clearly improves accuracy by nearly 0.06%-0.17%, bringing the final accuracy of LoCal+ SGD closer to baseline SGD.
6.4 ADDITIONAL COMPARATIVE ANALYSIS
In addition to the experiments performed in Section 3 to compare the performance of LoCal+SGD against existing techniques such as pruning during training (Lym et al., 2019) and training with stochastic depth (Huang et al., 2016), we conduct additional experiments to further solidify the superiority of our approach. We elucidate upon these comparisons as follows.
6.4.1 COMPARING LoCal+ SGD AGAINST SGD AT ISO-ACCURACY We compare the proposed LoCal+SGD training strategy against a SGD baseline that is trained with fewer epochs, i.e., the number of epochs required to reach the highest accuracy obtained by LoCal+ SGD across the total training periods listed in Section 6.1. For the ImageNet benchmarks, the runtime improvements are listed in Table 6 below. Clearly, LoCal+SGD continues to achieve significant speed-ups (around 1.25×) compared to the SGD baseline, even for complex benchmarks such as ResNet50 and MobileNetV2.
6.4.2 COMPARING LoCal+SGD AGAINST FREEZING LAYERS DURING TRAINING
In Section 3, we compare LoCal+SGD against freezing layers during training, a technique wherein the weights of the selected layers are held fixed instead of being updated with localized learning. In this section, we perform a more thorough comparison of LoCal+SGD against freezing layers during training. Specifically, we perform this comparison at iso-runtime, and analyze the resulting accuracy of either approach. To elaborate, we first identify the LoCal+SGD configurations that can reach the best accuracy within 0.05%, 0.1%, 0.25%, 0.5% and 1% of the baseline SGD accuracy. Then, for the same runtimes taken by each LoCal+SGD configuration, we identify the configuration that provides the best accuracy for the freezing layers approach. Our results for the Cifar10 ResNet18 benchmark can be found in Table 7. LoCal+SGD outperforms freezing layers during training on 3 of the 5 configurations studied, i.e., it is the superior technique when the accuracy loss relative to SGD is allowed to exceed 0.1%.
6.5 ANALYSIS OF STATIC SCHEDULES FOR LEARNING MODE SELECTION
The current LoCal+SGD framework is realized with the help of an automatic learning mode selection algorithm, which determines the position of the transition layer every epoch. Instead of a dynamic, data-dependent algorithm, we investigate the benefits of using a static schedule, i.e., the position of the transition layer is determined by a pre-defined scheduling function. To this end, we implement a simple static schedule that favors aggressive application of the localized learning rule in the initial layers, and gradually decreases the number of epochs for which localized learning is applied in the deeper layers. As shown in Equation 3, we opt for a quadratic scheduling function, as we empirically observe that it performs better than the linear functions we studied. Here, N determines the position of the transition layer every epoch, Emax is the maximum number of training epochs, and c1 and c2 are constants obtained using grid search.
N = ⌊max(0, c1 − c2 · (E − Emax)²)⌋ (3)
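For concreteness, the schedule of Equation 3 reduces to a one-line function. The sketch below is a minimal illustration; the values of c1 and c2 in the usage example are hypothetical placeholders for the grid-searched constants.

```python
import math

def static_transition_layer(epoch, e_max, c1, c2):
    """Quadratic schedule of Equation 3: the transition layer starts at 0
    (all layers trained with SGD) and moves deeper as epoch approaches e_max."""
    return math.floor(max(0.0, c1 - c2 * (epoch - e_max) ** 2))

# Hypothetical usage for a 90-epoch run; c1 bounds the deepest boundary position.
boundary = [static_transition_layer(e, e_max=90, c1=38, c2=0.005) for e in range(90)]
```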
We report the results using this static schedule in Table 8 for the ImageNet-ResNet50 and MobileNetV2 benchmarks. Compared to the results reported in Table 1, we find that the static schedule achieves slightly higher runtime benefits, for marginally lower accuracies. However, static schedules suffer from some drawbacks – several static scheduling functions are feasible, e.g. exponential, quadratic, etc., and identifying the best scheduling function for each network requires extensive empirical analysis. The learning mode selection algorithm utilized in the paper helps alleviate this by automatically identifying the position of the transition layer every epoch, leveraging the gradient information at the boundary between localized updates and SGD. | 1. What is the main contribution of the paper, and how does it aim to reduce CNN training time cost?
2. What is the proposed learning mode selection algorithm, and how does it work?
3. What is the criterion used in the learning mode selection algorithm, and how does it relate to the training process?
4. How does the reviewer suggest improving the experimental part of the paper?
5. What is the weak supervision scheme proposed in the paper, and why does the reviewer suggest evaluating its effect? | Review | Review
This paper tries to leverage the benefits of Hebbian learning to reduce CNN training time. To achieve this, a learning mode selection algorithm is proposed to progressively increase the number of layers trained with Hebbian learning. The writing of this paper is good and the idea is also interesting; however, the experimental part should be improved:
The criterion used in the learning mode selection algorithm is the model-update norm of the current epoch. If the norm is small enough, the transition layer index is increased. A small model-update norm also means that the current layer is nearly convergent. Could you simply freeze these layers to accelerate training? Yes, freezing-layer experiments are tried, but the comparison is not fair in my opinion. When the Hebbian-learning layers are frozen instead, the final accuracy drops but the training speedup improves. So if you freeze fewer layers to make the training speedups of freezing-layer training and Hebbian learning the same, what would the accuracy relationship be? Does the proposed method still outperform the freezing strategy?
A weak supervision scheme is proposed in this paper, but I did not find any experiments to evaluate its effect. Could you add this part?
ICLR | Title
Accelerating DNN Training through Selective Localized Learning
Abstract
Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. We propose LoCal+SGD, a new algorithmic approach to accelerate DNN training by selectively combining localized or Hebbian learning within a Stochastic Gradient Descent (SGD) based training framework. Back-propagation is a computationally expensive process that requires 2 Generalized Matrix Multiply (GEMM) operations to compute the error and weight gradients for each layer. We alleviate this by selectively updating some layers’ weights using localized learning rules that require only 1 GEMM operation per layer. Further, since the weight update is performed during the forward pass itself, the layer activations for the mini-batch do not need to be stored until the backward pass, resulting in a reduced memory footprint. Localized updates can substantially boost training speed, but need to be used selectively and judiciously in order to preserve accuracy and convergence. We address this challenge through the design of a Learning Mode Selection Algorithm, where all layers start with SGD, and as epochs progress, layers gradually transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while the transition layer and later layers use gradient-based updates. The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs. We also propose a low-cost weak supervision mechanism by controlling the learning rate of localized updates based on the overall training loss. We applied LoCal+SGD to 8 image recognition CNNs (including ResNet50 and MobileNetV2) across 3 datasets (Cifar10, Cifar100 and ImageNet). Our measurements on a Nvidia GTX 1080Ti GPU demonstrate up to 1.5× improvement in end-to-end training time with ∼0.5% loss in Top-1 classification accuracy.
1 INTRODUCTION
Deep Neural Networks (DNNs) have achieved continued success in many application domains involving images (Krizhevsky et al., 2017), videos (Ng et al., 2015), text (Zhou et al., 2015) and natural language (Goldberg & Hirst, 2017). However, training state-of-the-art DNN models is computationally quite challenging, often requiring exa-FLOPs of compute as the models are quite complex and need to be trained using large datasets. Despite rapid improvements in the capabilities of GPUs and the advent of specialized accelerators, training large models using current platforms is still quite expensive and often takes days or even weeks. In this work, we aim to reduce the computational complexity of DNN training through a new algorithmic approach called LoCal+SGD1, which alleviates the key performance bottlenecks in Stochastic Gradient Descent (SGD) through selective use of localized or Hebbian learning.
Computational Bottlenecks in DNN Training. DNNs are trained in a supervised manner using gradient-descent based cost minimization techniques such as SGD (Bottou, 2010) or Adam (Kingma & Ba, 2015). The training inputs (typically grouped into minibatches) are iteratively forward propagated (FP ) and back propagated (BP ) through the DNN layers to compute weight updates that push the network parameters in the direction that decreases the overall classification loss.
1In addition to combining localized and SGD-based learning, LoCal+SGD is Low-Calorie SGD, i.e., SGD with reduced computational requirements.
Back-propagation is computationally expensive, accounting for 65-75% of the total training time on GPUs. This is attributed to two key factors: (i) BP involves 2 Generalized Matrix Multiply (GEMM) operations, one to propagate the error across layers and the other to compute the weight gradients, and (ii) when training on distributed systems using data/model parallelism (Dean et al., 2012b; Krizhevsky et al., 2012), aggregation of weight gradients/errors across devices incurs significant communication overhead. Further, BP through auxiliary ops such as batch normalization is also more expensive than FP.
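To make the 2-GEMM cost concrete, the snippet below spells out the backward pass of a single fully-connected layer; this is generic textbook math rather than code from the paper.

```python
import numpy as np

def linear_backward(x, W, grad_out):
    """Backward pass of y = x @ W for a mini-batch x of shape (B, d_in).

    BP needs two GEMMs per layer: one to propagate the error to the
    previous layer, and one to form the weight gradient (which also
    requires the forward activations x to have been stored)."""
    grad_x = grad_out @ W.T   # GEMM 1: error propagation to earlier layers
    grad_W = x.T @ grad_out   # GEMM 2: weight gradient
    return grad_x, grad_W
```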
Prior Efforts on Efficient DNN Training. Prior research efforts to improve DNN training time can be grouped into a few directions. One group of efforts enables larger scales of parallelism in DNN training through learning rate tuning (You et al., 2017a; Goyal et al., 2017; You et al., 2017b) and asynchronous weight updates (Dean et al., 2012a). Another class of efforts employs importance-based sample selection during training, wherein ‘easier’ training samples are selectively discarded to improve runtime (Jiang et al., 2019; Zhang et al., 2019). Finally, model quantization (Sun et al., 2019) and pruning (Lym et al., 2019) can lead to significant runtime benefits during training by enabling the use of reduced-bitwidth processing elements.
LoCal+SGD: Combining SGD with Localized Learning. Complementary to the aforementioned efforts, we propose a new approach, LoCal+SGD, to alleviate the performance bottlenecks in DNN training, while preserving model accuracy. Our hybrid approach combines Hebbian or localized learning (Hebb) with SGD by selectively applying it in specific layers and epochs. Localized learning rules (Hebb; Oja, 1982; Zhong, 2005) utilize a single feed-forward weight update to learn the feature representations, eschewing BP. Careful formulation of the localized learning rule can result in ∼2× computation savings compared to SGD and also significantly reduces memory footprint, as activations from FP need not be retained until BP. The reduction in memory footprint can in turn allow increasing the batch size during training, which leads to further runtime savings due to better compute utilization and reduced communication costs. It is worth noting that localized learning has been actively explored in the context of unsupervised learning (Chen et al., 2020; van den Oord et al., 2018; Hénaff et al., 2019). Further, there have been active research efforts on neuro-scientific learning rules (Lee et al., 2015; Nøkland, 2016). Our work is orthogonal to such efforts and represents a new application of localized learning in a fully supervised context, wherein we selectively combine it within an SGD framework to achieve computational savings.
Preserving model accuracy and convergence with LoCal+SGD requires localized updates to be applied judiciously, i.e., only to selected layers in certain epochs. We address this challenge through the design of a learning mode selection algorithm. At the start of training, the selection algorithm initializes the learning mode of all layers to SGD, and as training progresses, determines the layers that transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while subsequent layers use gradient-based updates. This allows BP to stop at the transition layer, as layers before it have no use for the back-propagated errors. The algorithm takes advantage of the magnitude of the weight updates of the Localized→SGD transition layer in deciding the new position of the boundary every epoch. Further, we provide weak supervision by tweaking the learning rate of locally updated layers based on the overall training loss.
Contributions: To the best of our knowledge, LoCal+SGD is the first effort that combines localized learning (an unsupervised learning technique) within a supervised SGD context to reduce computational costs while maintaining classification accuracy. This favorable tradeoff is achieved by LoCal+SGD through a Learning Mode Selection Algorithm that applies localized learning to selected layers and epochs. Further improvement is achieved through the use of weak supervision by modulating the learning rate of locally updated layers based on the overall training loss. Across 8 image recognition CNNs (including ResNet50 and MobileNet) and 3 datasets (Cifar10, Cifar100 and ImageNet), we demonstrate that LoCal+SGD achieves up to 1.5× improvement in training time with ∼0.5% Top-1 accuracy loss on a Nvidia GTX 1080Ti GPU.
2 LoCal+SGD: COMBINING SGD WITH SELECTIVE LOCALIZED LEARNING
The key idea in LoCal+SGD is to apply localized learning to selected layers and epochs during DNN training to improve the overall execution time, without incurring loss in accuracy. The following components are critical to the effectiveness of LoCal+SGD:
• Localized Learning Rule Formulation. We formulate a computationally efficient localized learning rule and highlight the clear runtime benefits when compared to SGD.
• Learning Mode Selection Algorithm. We propose a learning mode selection algorithm that chooses between localized learning and SGD-based learning for each layer in every epoch, based on the potential impact on accuracy and computational benefits.
• Weak Supervision. We propose a weak supervision technique, which comprises a low-cost supervision signal communicated to the localized learning layers in each epoch. The signal modulates the learning rates of these layers based on the rate of change of the overall classification loss.
In the following sub-sections, we describe the salient aspects of these components in greater detail.
2.1 EFFICIENT LOCALIZED LEARNING
Localized learning has been extensively explored in the context of unsupervised learning, demonstrating success on small (<= 3 layer) networks using relatively simple datasets (e.g., MNIST (LeCun & Cortes, 2010), Cifar-10 (Krizhevsky et al., a)), with an accuracy gap that is yet to be bridged on larger networks and datasets (e.g., ResNet50 or MobileNetV2 on ImageNet (Deng et al., 2009)). First proposed in (Hebb), the key intuition behind localized learning rules is to encourage correlations between neurons that have similar activation patterns. Equation 1 depicts the Hebbian weight update proposed in (Hebb), for a synapse with weight W, connecting a pair of input and output neurons whose activation values are represented by x and y respectively, with η as the learning rate.
∆W = η · x · y (1)
Considerable research has gone into evolving this equation over the years to improve the performance of localized learning (Oja, 1982; Zhong, 2005). However, many of the proposed rules are computationally complex, or are difficult to parallelize on modern hardware platforms such as GPUs and TPUs. Since our primary goal is improving DNN training time, we adopt the computationally simple localized learning rule presented in Equation 1.
The learning rule in Equation 1 assumes a distinct synapse between each input and output neuron pair. While its application to fully-connected (fc) layers is straightforward, we need to consider the sharing of weights between neuron pairs in convolutional (conv) layers. For updating a shared weight of a conv layer, we calculate the individual updates due to each pair of pre- and post-synaptic neurons sharing the weight and sum all such updates. This essentially reduces to a convolution operation between the input and output activations of the layer and can be expressed by Equation 3 in Figure 1. For further computational efficiency improvement, unlike Equation 1, we consider the pre-activation values of the outputs, i.e., zl, instead of their post-activation values al. Further, we normalize the localized update values as shown in Equation 4 of Figure 1, as it was observed to achieve better convergence in practice.
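A possible PyTorch realization of this update is sketched below. It is our own illustrative sketch, not the authors' code: it reuses torch.nn.grad.conv2d_weight (whose output matches the weight shape) to compute the convolution between the layer input and its pre-activation output, and the learning rate and padding values are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def localized_conv_update(weight, x, lr=0.01, stride=1, padding=1):
    """One localized (Hebbian-style) update for a conv layer.

    x: layer input a_{l-1}, shape (B, C_in, H, W)
    weight: conv kernel, shape (C_out, C_in, kH, kW)
    """
    # Forward pass: pre-activation output z_l.
    z = F.conv2d(x, weight, stride=stride, padding=padding)
    # Eq. 3: convolution between the input and the pre-activation output,
    # a single GEMM-like op (contrast with the two GEMMs of SGD-based BP).
    delta_w = torch.nn.grad.conv2d_weight(
        x, weight.shape, z, stride=stride, padding=padding)
    # Eq. 4: normalize the update before applying it. x can be discarded
    # right after this call -- no activations are retained for BP.
    weight += lr * delta_w / (delta_w.norm() + 1e-12)
    return z
```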
Overall, we utilize Equations 3 and 4 from Figure 1 to perform the weight updates in all layers that are earlier than the Localized→SGD transition layer during a certain epoch. All other layers continue to be updated using SGD-based BP, expressed by Equations 5-7 in Figure 1. SGD updates are applied to batch-normalization layers present after the Localized→SGD transition layer, and are otherwise skipped. Clearly, Equation 3 has the same computational complexity as Equation 6 of SGD-based BP for conv and fc layers. Thus, from Figure 1, we can directly infer that our localized learning rule will be considerably faster than SGD-based BP. In practice, we measured this improvement to be more than 2× on a NVIDIA GTX 1080Ti GPU for the ImageNet-ResNet50 benchmark, across all conv and fc layers. In addition to the computational complexity, the memory footprint of SGD-based
BP is also higher. This is because DNN software frameworks commonly store all activation values computed during FP to avoid recomputing al−1, the input activations to the layers, used in Equation 6 of SGD-based BP . In contrast, the localized update for a layer can be performed as soon as the FP through the layer is complete. The activation tensor al of layer L can be discarded or over-written as soon as FP proceeds to the next layer in the network, thereby freeing up a significant portion of on-device memory during training. In turn, this can allow larger minibatch sizes to be accommodated on a given hardware platform, when the localized updates are applied on a sufficient number of layers.
2.2 LEARNING MODE SELECTION ALGORITHM
The compute benefits of localized learning come at the cost of potential loss in classification accuracy with respect to SGD training. Thus, we utilize a learning mode selection algorithm to judiciously choose when and where to apply localized learning. The proposed algorithm identifies the learning mode of each layer at every epoch to maximize the runtime benefits, while incurring minimal losses in classification accuracy.
To design an efficient learning mode selection algorithm, we first study the effects of different spatiotemporal patterns of localized learning on the computational efficiency and classification accuracy of a network. We specifically investigate whether localized learning is more suitable for specific layers in the network and specific phases in the training process.
Impact on runtime efficiency: We first analyze the spatial trends, i.e., whether locally updating specific layers in the network results in better runtime efficiency. In a particular epoch, if a convolutional layer L, updated with SGD, precedes a convolutional layer K that is updated locally, calculating the SGD-based error gradients of layer L, i.e., δL, requires error propagation through the locally updated layer K. From a compute efficiency perspective, the benefits of using localized updates in layer K completely vanish. Thus, it makes sense to partition the network into two regions - a prefix (set of initial layers) that is updated using localized learning, followed by layers that are updated with SGD. SGD-based BP is stopped at the junction of the two regions. Naturally, the compute benefits increase when the number of locally updated layers is higher and thus the boundary, i.e., the Localized→SGD transition layer, is moved deeper into the network. The impact of different temporal patterns on runtime efficiency is quite straightforward, with a higher number of locally updated epochs leading to higher benefits. Further, as the compute complexity of localized updates is constant across different epochs, these benefits are agnostic of which particular epoch involves localized learning.
Impact on accuracy: To analyze the impact on accuracy, we first examine the nature of features learnt by different layers trained by SGD. It is commonly accepted that the initial layers of a network (Agrawal et al., 2014) perform feature extraction, while later layers aid in the classification process. As localized learning demonstrates better performance for feature extraction, applying it more aggressively, i.e., for a higher number of epochs, in the initial layers has a much smaller impact on accuracy. However, for later layers in the network, the number of localized learning epochs should be progressively reduced to preserve accuracy.
Overall, based on the impact of localized learning on both runtime and accuracy, we find that a good learning mode selection algorithm should favor application of localized learning to a contiguous group of initial layers, while ensuring fewer or no localized learning epochs in later layers. We further
impose an additional constraint on top of this spatio-temporal pattern. Specifically, we allow each layer to transition from one learning mode to another at most once during the entire training process. We empirically observe that utilizing SGD as the initial learning mode allows the network to achieve a higher accuracy than utilizing localized learning as the initial mode. SGD essentially provides a better initialization point for all layers, and the subsequent use of localized updates enables the training to converge with good accuracy.
Algorithm 1 Learning Mode Selection Algorithm
Input: TE (index of the transition layer at epoch E), ek (epochs since last transition), ||∆WE|| (L2 norm of the weight update of the transition layer at epoch E), K (minimum interval between transitions), tshift (number of layers to shift the boundary)
Output: TE+1 (index of the transition layer at epoch E+1)
1: Wavg = (1/K) · Σ(e = E−K to E−1) ||∆We||
2: if ||∆WE|| <= α · Wavg and ek >= K then
3:   TE+1 = TE + tshift
4:   ek = 0
5: else
6:   TE+1 = TE
7:   ek = ek + 1

In accordance with the above considerations, we propose a learning mode selection algorithm, described in Algorithm 1, that identifies the position of the boundary, i.e., the Localized→SGD transition layer, every epoch. To that end, the algorithm analyzes the L2 norm of the SGD weight updates made to the Localized→SGD transition layer across epochs and determines whether the boundary can be shifted deeper into the network for the next epoch. In order to ensure stability in the training process, the algorithm moves the boundary at most once in every K epochs. It calculates the running average of the norm of the updates, Wavg, over the last K epochs (line 1). The boundary is shifted to the right only if the weight update in epoch E is within a fraction α of Wavg, and K epochs have transpired since the last transition (line 2). The rationale for this criterion is that sustained high magnitudes of weight updates in the transition layer indicate that they are potentially critical to accuracy, in which case the transition layer must continue being updated with SGD. If the criterion is not satisfied, the boundary remains stationary (line 5).
The value of α is set by analyzing the trends in the weight update magnitudes across the training process for different networks. The hyper-parameter tshift is set to the size of a recurring block, such as the residual blocks in ResNets and MobileNetV2. The hyper-parameter K is selected in a manner that ensures that localized updates are never applied beyond some fraction of the initial network layers. We denote this fraction as Lmax, which is set to 0.75 in all our experiments. Equation 2 is used to compute K for a network of L layers and a total training period of Emax epochs.
K = Emax / (Lmax · L / tshift) (2)
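Putting Algorithm 1 and Equation 2 together, a minimal Python sketch could look as follows; how the norm history is handled when the boundary moves (here, cleared so that the running average describes the new transition layer) is our assumption, as the paper does not spell out this detail.

```python
from collections import deque

def min_interval(e_max, l_max, num_layers, t_shift):
    """Equation 2: minimum number of epochs between boundary shifts."""
    return max(1, int(e_max / (l_max * num_layers / t_shift)))

class LearningModeSelector:
    """Algorithm 1: tracks the Localized->SGD transition layer index."""

    def __init__(self, alpha, t_shift, k):
        self.alpha, self.t_shift, self.k = alpha, t_shift, k
        self.t = 0        # transition layer index (0 = all layers use SGD)
        self.e_k = 0      # epochs since the last transition
        self.norms = deque(maxlen=k)  # ||dW|| of the transition layer

    def step(self, dw_norm):
        # Running average over the last K epochs (line 1).
        w_avg = sum(self.norms) / len(self.norms) if self.norms else float("inf")
        if dw_norm <= self.alpha * w_avg and self.e_k >= self.k:   # line 2
            self.t += self.t_shift   # shift the boundary deeper (line 3)
            self.e_k = 0
            self.norms.clear()       # assumption: restart stats for the new layer
        else:
            self.e_k += 1            # boundary stays put (lines 5-7)
        self.norms.append(dw_norm)
        return self.t
```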
In Figure 3, we plot the progression of the transition layer across the ResNet-34 and -50 benchmarks trained on the ImageNet dataset using LoCal+SGD. Interestingly, the weight update norm metric automatically modulates the rate at which the boundary progresses, as the boundary traverses the deeper layers at a slower rate.
2.3 WEAK SUPERVISION
To further bridge the accuracy gap between our approach and end-to-end SGD training, we introduce weak supervision in the locally updated layers. Unlike the SGD-updated layers, the locally updated layers in our approach cannot take advantage of the information provided by supervision, i.e., the classification error evaluated at the output. We utilize this supervised information through a low-cost weak supervision scheme that consists of a single signal sent to all layers updated locally in a particular epoch, and is derived from the classification loss observed over the past few epochs. The weak supervision scheme is described in Algorithm 2.
The key principle behind the weak supervision scheme is to control the learning rates of the locally updated layers based on the rate at which the overall classification loss changes. For example, if the overall classification loss has increased across consecutive epochs, we reverse the direction of the
updates (line 3) in the next epoch. In contrast, the update direction is maintained if the overall loss is decreasing (line 5). We find that this weak supervision provides better accuracy results than other learning rate modulation techniques for the locally updated layers such as Adam or momentum-based updates.
Algorithm 2 Weak Supervision Scheme
Input: Li (overall classification loss at epoch i), lrL (original learning rate of layer L)
Output: WL (weight update of layer L)
1: ∆WL = conv(al−1, zl)
2: if Li−1 < Li then
3:   WL = WL − lrL · ∆WL/||∆WL||
4: else
5:   WL = WL + lrL · ∆WL/||∆WL||

We would like to highlight that traditional SGD provides fine-grained supervision and involves evaluating the error gradients for every neuron in the network. In contrast, the proposed weak supervision scheme provides coarse-grained supervision by forcing all weights to re-use the same loss information. Overall, our weak supervision scheme is not developed with the intent to compete with SGD updates, but is rather a simple, approximate and low-cost technique that brings the final accuracy of LoCal+SGD on par with end-to-end SGD training performance.
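A minimal sketch of this rule, assuming the normalized localized update delta_w has already been computed during the forward pass:

```python
import torch

@torch.no_grad()
def weakly_supervised_update(weight, delta_w, lr, loss_prev, loss_curr):
    """Algorithm 2: flip the sign of the localized update whenever the
    overall classification loss rose across consecutive epochs."""
    direction = -1.0 if loss_prev < loss_curr else 1.0
    weight += direction * lr * delta_w / (delta_w.norm() + 1e-12)
```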
3 EXPERIMENTAL RESULTS
In this section, we present the results of our experiments highlighting the compute benefits achieved by LoCal+SGD. We evaluate the benefits across a suite of 8 image-recognition DNNs across 3 datasets. We consider the ResNet18 (He et al., 2015) and VGG13 (Simonyan & Zisserman, 2015) networks for the Cifar10 (Krizhevsky et al., a) and Cifar100 (Krizhevsky et al., b) datasets; and the ResNet34, ResNet50 (He et al., 2015) and MobileNetV2 (Sandler et al., 2018) networks for the ImageNet dataset (Deng et al., 2009). All experiments are conducted on Nvidia GTX 1080Ti GPUs with the batch size set to 64 per GPU, unless otherwise mentioned. Further experimental methodology details for the baseline and proposed approach are provided in the Appendix.
3.1 SINGLE GPU EXECUTION TIME BENEFITS
ImageNet: Table 1 presents the performance of the baseline (end-to-end SGD training) and the proposed LoCal+SGD algorithm on the ImageNet benchmarks in terms of the Top-1 classification error and runtime observed on a single GPU. For all benchmarks listed here, LoCal+SGD applies localized updates for nearly 50-60% of the layers. As can be seen, LoCal+SGD achieves up to ∼1.4× reduction in runtime compared to the baseline, while incurring <0.5% loss in Top-1 accuracy.
Table 1 also compares the performance of LoCal+SGD against existing research efforts designed to improve training efficiency. We perform this analysis against two efforts, namely (i) Training with stochastic depth (Huang et al., 2016) and (ii) Structured Pruning during Training (Lym et al., 2019). Training with stochastic depth, as the name suggests, stochastically bypasses residual blocks by propagating input activations/error gradients via identity or downsampling transformations, resulting in improved training time. However, the approach is targeted towards extremely deep networks and
as seen in Table 1, it incurs a noticeable accuracy loss on networks such as ResNet34, ResNet50 and MobileNetV2. Compared to training with stochastic depth, our proposal clearly achieves better accuracy as well as training runtime benefits. The key principle behind the pruning during training approach is to reduce the size of the weight and activation tensors in a structured manner during training, thereby providing speed-ups on GPU/TPU platforms. However, on complex benchmarks such as ResNet50, such techniques achieve speed-ups at the cost of a significant drop in accuracy (∼1.5%). To further demonstrate the utility of localized updates in our approach, we consider a third technique, wherein layers selected to be updated locally for a given epoch are instead frozen, i.e., the parameters are held fixed during that epoch. While this achieves better runtime savings, it incurs a considerably higher loss (∼1%) in accuracy, further underscoring the benefits of LoCal+SGD.
CIFAR-10 and CIFAR-100: Table 2 presents the accuracy and corresponding compute benefits of the baseline and the proposed technique, as well as training with stochastic depth and layer freezing, for the CIFAR-10 and CIFAR-100 datasets. Stochastic depth is applicable only to residual blocks and is hence not considered for the VGG-13 network. Across benchmarks, we observe up to a 1.51× improvement in training runtime. Compared to the ImageNet benchmarks, LoCal+SGD applies localized updates more aggressively in the CIFAR-10 and CIFAR-100 benchmarks, i.e., more layers are updated locally and for a higher number of epochs. This leads to the superior compute benefits of the proposed scheme on these benchmarks.
3.2 EXECUTION TIME BENEFITS FOR MULTI-GPU TRAINING
We analyze the memory footprint of the ResNet50 network when trained with LoCal+SGD on the ImageNet dataset. Training first commences with all layers updated with SGD, resulting in a high memory footprint. Due to the 10 GB capacity of the chosen GPU, the mini-batch size is set to 64 per GPU. As the Localized→SGD transition layer progresses across the network, the memory footprint required also gradually reduces across epochs. We take advantage of this reduction in memory footprint in the context of distributed training using 4 GPUs with data parallelism. Specifically, we extract additional runtime benefits by increasing the batch size on each GPU,
which reduces the frequency of gradient aggregation between devices and alleviates the communication overhead. At epoch 33, the memory footprint per GPU reduces to less than 5 GB, allowing training with an increased mini-batch size of 128 per GPU from epoch 33 onwards. The doubling of the batch-size provides an additional 6% runtime improvement, when measured across the entire training period. We note that other training techniques such as training with stochastic depth cannot exploit this feature, due to minimal reduction in memory footprint.
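One simple way to realize this batch-size switch is to rebuild the data loader once enough layers have transitioned; the helper below is hypothetical, with the epoch threshold and batch sizes mirroring the numbers quoted above.

```python
from torch.utils.data import DataLoader

def loader_for_epoch(dataset, epoch, switch_epoch=33, small_bs=64, large_bs=128):
    # Before epoch 33 the full activation footprint limits us to 64/GPU;
    # afterwards, localized updates free enough memory for 128/GPU, which
    # also halves the frequency of gradient aggregation across devices.
    bs = large_bs if epoch >= switch_epoch else small_bs
    return DataLoader(dataset, batch_size=bs, shuffle=True, num_workers=4)
```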
3.3 ABLATION ANALYSIS
As mentioned in Section 2, the hyper-parameters α, tshift and Lmax control the progression of the boundary across the network. Different values of each parameter can result in different learning mode configurations during training, corresponding to different points in the computational efficiency vs. accuracy trade-off space. To understand the trade-off space between accuracy and runtime benefits, we now individually study the impact of each parameter.
[Figure 5: Compute efficiency vs. accuracy trade-off on the ImageNet dataset for (a) ResNet50 and (b) MobileNetV2; each panel plots the speed-up against the loss in accuracy (%).]
Impact of α: Figure 5 depicts the best compute benefits achieved for different α, for accuracy losses ranging from 0.1%-1.5%, for the ResNet50 and MobileNetV2 benchmarks on ImageNet. On the ResNet50 benchmark, even while limiting the loss in accuracy to 0.1%, LoCal+SGD achieves a 1.1× speedup over traditional SGD. The speedups increase to 1.38×-1.47× when around 1.5% loss in accuracy is tolerable.
[Figure 6: Top-1 error (%) together with speed-up/runtime savings on the ImageNet-ResNet50 benchmark, plotted against (a) tshift (%) and (b) Lmax (%).]
Impact of tshift: Figure 6(a) depicts the impact of tshift, denoted as a percentage of the total network depth, on accuracy. Accuracy degrades for very small values of tshift, is largely stable in the regime of tshift between 5-12%, and begins to experience small degradations again when tshift exceeds 12%. These trends can be explained by analyzing the rate at which the transition layer progresses, and the number of layers transitioning to localized updates in an epoch for different tshift values. Smaller values of tshift (<3%) give rise to low values of K (∼1-2 epochs), the minimum number of epochs that must elapse before the transition layer can shift again. This results in fast progression of the transition layer across the network, leading to rapid changes in the learning mode at the boundary, thereby negatively impacting accuracy. In contrast, while larger tshift values (>12%) encourage slow progression of the boundary, a larger number of layers transition from SGD to localized updates in a single epoch, thereby impacting performance. We note here that in both cases, while α and Lmax can be tuned to control the progression and mitigate the loss in accuracy, the runtime savings are vastly reduced (<10%). Furthermore, for fixed values of Lmax and α, the runtime benefits are largely insensitive to tshift, as the average number of layers updated with localized updates remains similar. Hence, for the best accuracy and runtime benefits, we set tshift in the range of 5-10% for all networks.
Impact of Lmax : Figure 6(b) depicts the impact of Lmax on accuracy for the ResNet50 network. For each Lmax, we identify the α and tshift that provide the best runtime benefits with minimal loss in accuracy (less than 0.5%). As with tshift, we denote Lmax as a percentage of the total network
depth. As seen in the figure, the degradation in accuracy increases slowly for Lmax in the initial layers - it is merely 0.1% at around Lmax = 30%, and increases to 0.4-0.5% for Lmax = 60-70%. However, the accuracy degradation sharply increases beyond 2% once Lmax exceeds 90% of the network depth. Further, runtime benefits generally increase with higher values of Lmax, for fixed tshift and α. Hence, for achieving a good accuracy versus runtime trade-off, we usually set Lmax to 75% for all networks.
4 RELATED WORK
This section discusses research efforts related to the proposed LoCal+SGD training technique. These efforts can be broadly categorized into two classes. The first class of efforts focuses on compute-efficient DNN training. All efforts in this class utilize gradient-descent algorithms to train the DNN model. These techniques are largely complementary to LoCal+SGD, as they can potentially be applied to the parts of the DNN model updated with SGD. In Section 3, we demonstrated that LoCal+SGD achieves a superior accuracy versus computational efficiency trade-off compared to some of these efforts. The second class of efforts involves neuro-scientifically faithful learning rules, such as feedback alignment (Nøkland, 2016). Our work is orthogonal to such efforts, as we selectively combine localized learning rules with SGD for better computational efficiency.
We elucidate upon the different research efforts in both directions as follows.
Hyper-parameter tuning: Many notable algorithmic efforts are directed towards achieving training efficiency by controlling the hyper-parameters involved in gradient descent, notably the learning rate. (You et al., 2017a; Akiba et al., 2017; Goyal et al., 2017; You et al., 2017b) propose learning rate tuning algorithms that complete training in less than an hour with no loss in accuracy, when distributed across hundreds of CPU/GPU cores.
Model size reduction during training: Model size reduction via pruning and quantization is a popular technique to reduce compute costs during inference. In many of these efforts, a dense or full-precision model is re-trained or fine-tuned to obtain a pruned or quantized model. Recently, several efforts have also investigated dynamically pruning (Lym et al., 2019) or quantizing (Sun et al., 2019) a model during training itself. The reduction in model size results in training speed-ups. Taking a slightly different approach, (Huang et al., 2016) propose stochastically dropping residual blocks on extremely deep networks such as ResNet-1202, not only for training runtime benefits but also for better accuracy due to improved gradient strength.
Instance importance based training: Recent research efforts have discovered that not all training samples are required for improving loss minimization during SGD training (Jiang et al., 2019; Zhang et al., 2019). That is, a sizable fraction of the samples can be skipped during several epochs, depending on their impact on the classification loss evaluated during FP . This translates to a reduction in mini-batches, providing considerable runtime benefits.
Neuro-scientific learning rules: Back-propagation algorithms utilized in DNN training are not biologically plausible, and do not explain how learning actually happens in the brain. To this end, there have been several efforts that develop biologically faithful learning algorithms and demonstrate considerable success on complex benchmarks including Cifar10 and ImageNet. For example, unlike conventional DNN training, feedback alignment algorithms (Nøkland, 2016) tackle the weight transport problem (Liao et al., 2015) by allowing for asymmetry in the weight values during forward and back propagation. Likewise, Target-Propagation (Lee et al., 2015) encourages neural activity to reach desired target activations evaluated during forward propagation itself, instead of utilizing loss gradients.
5 CONCLUSION
In this paper, we introduce a new approach to improve the training efficiency of state-of-the-art DNNs. Specifically, we take advantage of the computationally efficient nature of localized learning rules and selectively update some layers with these rules instead of SGD. We design an intelligent learning mode selection algorithm that determines the update method for the convolutional layers of the network in every epoch, maintaining the accuracy level while extracting maximum benefits. Further, we also implement a low-cost weak supervision scheme that brings the accuracy of the proposed scheme closer to traditional SGD training. Across a benchmark suite of 8 DNNs, we achieve up to 1.5× reduction in training times, as measured on a modern GPU platform.
6 APPENDIX
6.1 EXPERIMENTAL SETUP
This subsection describes the experimental setup used for realizing the baseline and proposed LoCal+SGD training schemes, on the benchmarks specified in Section 3 of the main paper. We conduct our experiments on the complete training and test datasets of each benchmark, using the PyTorch (Paszke et al., 2019) framework.
Baseline: We consider end-to-end SGD training as the baseline in our experiments. The hyperparameters used in SGD training of each of the benchmarks are described below.
ImageNet: For the experiments in Section 3.1, we utilize a batch-size of 64 per GPU for all benchmarks. For the ResNet50 and ResNet34 benchmarks, the initial learning rate is set to 0.025. The learning rate is decreased by 0.1 every 30 epochs, for a total training duration of 90 epochs, and the weight decay is 4e-5. The MobileNetV2 benchmark utilizes an initial learning rate of 0.0125. We use a cosine learning rate decay schedule, as in (Li et al., 2019), for 150 epochs. The weight decay is set to 4e-5. All benchmarks use an input size of 224*224*3.
For the experiments in Section 3.2, the total batch-size at epoch 1 is 256 (64*4), with the initial learning rate set to 0.1 for the ResNet benchmarks and 0.05 for the MobileNetV2 benchmark. All other parameters remain the same.
Cifar10 and Cifar100: All Cifar10 and Cifar100 experiments utilize a batch-size of 64. The Cifar10 benchmarks are trained with an initial learning rate of 0.05 that is decayed by 0.1 every 10 epochs, across 90 epochs. The initial learning rate of the Cifar100 benchmarks is 0.025 and decayed by 0.5 every 20 epochs, for 150 epochs in total. The weight decay is set to 5e-4. Both benchmarks utilize an input size of 32*32*3.
LoCal+SGD: In the proposed LoCal+SGD training scheme, the layers updated with SGD are trained with the same hyper-parameters used in the baseline implementation. Further, LoCal+SGD training is conducted using the same number of epochs as baseline SGD training. When a layer is updated locally, the initial learning rate is 0.01 and is decayed by a factor of 2 and 10 every 30 epochs, for the Cifar and the ImageNet benchmarks respectively. In all experiments, the α parameter is set to 0.95. We measure the accuracy and runtime of the proposed scheme for the same number of training epochs as the baseline implementations.
6.2 HYPER-PARAMETER TUNING
To realize LoCal+SGD, we introduce three hyper-parameters: α, tshift and Lmax. tshift controls the number of layers that switch from SGD-based to localized updates when the boundary shifts, Lmax is the maximum number of layers that can be updated with localized learning rules, and α is the threshold used to determine the position of the transition layer every epoch by analyzing the gradient information at the boundary between the localized and SGD updates.
To obtain optimized values for these hyper-parameters, we first perform a simple grid search using a single network for a particular dataset (for example, we choose the ResNet50 network for ImageNet). We transfer the same hyper-parameter values to other networks for the same dataset. We justify our use of common hyper-parameter values by the following experiment. In Table 4 below, we depict the results on other ImageNet benchmarks (ResNet34 and MobileNetV2) when hyper-parameter tuning is performed for each benchmark individually. As can be seen, the accuracy and runtime benefits are only marginally better than those obtained using a common set of hyper-parameters obtained by tuning on the ResNet50 benchmark. We thus utilize common values for a dataset, effectively rendering them constants. The time taken to obtain these constants is thus a one-time cost, and does not impact the speedups obtained by LoCal+SGD.
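The one-time tuning step can be sketched as a plain grid search; the candidate grids below are illustrative assumptions (the text only fixes the selected values, e.g., α = 0.95 and Lmax = 0.75).

```python
import itertools

def tune_hyperparameters(train_and_eval):
    """Hypothetical one-time grid search on a single network per dataset;
    train_and_eval(alpha, t_shift, l_max) -> (top1_accuracy, runtime)."""
    best_score, best_cfg = None, None
    for alpha, t_shift, l_max in itertools.product(
            [0.90, 0.95, 0.99],      # threshold fraction
            [0.05, 0.10],            # boundary shift, fraction of depth
            [0.50, 0.75]):           # max fraction of locally updated layers
        acc, runtime = train_and_eval(alpha, t_shift, l_max)
        score = (acc, -runtime)      # prefer accuracy, break ties by speed
        if best_score is None or score > best_score:
            best_score, best_cfg = score, (alpha, t_shift, l_max)
    return best_cfg
```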
6.3 IMPACT OF WEAK SUPERVISION
In Table 5, we highlight the impact of the weak supervision technique on final classification accuracy. As can be seen, across all our benchmarks, the weak supervision technique clearly improves accuracy by 0.06%-0.17%, bringing the final accuracy of LoCal+SGD closer to baseline SGD.
6.4 ADDITIONAL COMPARATIVE ANALYSIS
In addition to the experiments performed in Section 3 to compare the performance of LoCal+SGD against existing techniques such as pruning during training (Lym et al., 2019) and training with stochastic depth (Huang et al., 2016), we conduct additional experiments to further solidify the superiority of our approach. We elucidate upon these comparisons as follows.
6.4.1 COMPARING LoCal+SGD AGAINST SGD AT ISO-ACCURACY
We compare the proposed LoCal+SGD training strategy against an SGD baseline that is trained with fewer epochs, i.e., the number of epochs required to reach the highest accuracy obtained by LoCal+SGD across the total training periods listed in Section 6.1. For the ImageNet benchmarks, the runtime improvements are listed in Table 6 below. Clearly, LoCal+SGD continues to achieve significant speed-ups (around 1.25×) compared to the SGD baseline, even for complex benchmarks such as ResNet50 and MobileNetV2.
6.4.2 COMPARING LoCal+SGD AGAINST FREEZING LAYERS DURING TRAINING
In Section 3, we compare LoCal+SGD against freezing layers during training, a technique wherein the weights of the selected layers are held fixed instead of being updated with localized learning. In this section, we perform a more thorough comparison of LoCal+SGD against freezing layers during training. Specifically, we perform this comparison at iso-runtime, and analyze the resulting accuracy of either approach. To elaborate, we first identify the LoCal+SGD configurations that can reach the best accuracy within 0.05%, 0.1%, 0.25%, 0.5% and 1% of the baseline SGD accuracy. Then, for the same runtimes taken by each LoCal+SGD configuration, we identify the configuration that provides the best accuracy for the freezing layers approach. Our results for the Cifar10 ResNet18 benchmark can be found in Table 7. LoCal+SGD outperforms freezing layers during training on 3 of the 5 configurations studied, i.e., it is the superior technique when the accuracy loss relative to SGD is allowed to exceed 0.1%.
6.5 ANALYSIS OF STATIC SCHEDULES FOR LEARNING MODE SELECTION
The current LoCal+SGD framework is realized with the help of an automatic learning mode selection algorithm, which determines the position of the transition layer every epoch. Instead of a dynamic, data-dependent algorithm, we investigate the benefits of using a static schedule, i.e., the position of the transition layer is determined by a pre-defined scheduling function. To this end, we implement a simple static schedule that favors aggressive application of the localized learning rule in the initial layers, and gradually decreases the number of epochs for which localized learning is applied in the deeper layers. As shown in Equation 3, we opt for a quadratic scheduling function, as we empirically observe that it performs better than the linear functions we studied. Here, N determines the position of the transition layer every epoch, Emax is the maximum number of training epochs, and c1 and c2 are constants obtained using grid search.
N = ⌊max(0, c1 − c2 · (E − Emax)²)⌋ (3)
We report the results using this static schedule in Table 8 for the ImageNet-ResNet50 and MobileNetV2 benchmarks. Compared to the results reported in Table 1, we find that the static schedule achieves slightly higher runtime benefits, for marginally lower accuracies. However, static schedules suffer from some drawbacks – several static scheduling functions are feasible, e.g. exponential, quadratic, etc., and identifying the best scheduling function for each network requires extensive empirical analysis. The learning mode selection algorithm utilized in the paper helps alleviate this by automatically identifying the position of the transition layer every epoch, leveraging the gradient information at the boundary between localized updates and SGD. | 1. What are the strengths and weaknesses of the proposed algorithm for separating a network into two distinct regions?
2. How does the weak supervision rule improve localized learning, and what is the effect of this rule on performance?
3. How does the heuristic for learning mode selection complicate the algorithm, and is there a simpler way to select the learning mode?
4. How does the work relate to brain-like learning algorithms, efficient training, and model parallelism for massive networks?
5. What are some neuroscientific faithful learning rules, and how do they compare to the proposed method?
6. How does the work differentiate from pruning during training by initializing the neural network sparsely during initialization?
7. Are there any simple learning-rule-like schedules that can be used for selecting the learning mode?
8. How does the work scale giant models with conditional computation and automatic sharding?
9. How does the work compare to other lines of research, such as parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization?
10. How does the work impact the training of massive neural networks with trillions of parameters, and what are the potential applications of this innovation? | Review | Review
EDIT: 2020-11-27: Updated my score. Further explanations at the end of this post.
Summary: The authors propose an algorithm that separates a network into two distinct regions where one is trained with SGD while the other is trained with a local Hebbian learning rule. A weak supervision rule is proposed that improves localized learning. The authors demonstrate close to baseline performance on CIFAR-100 and ImageNet for diverse networks while speeding up training.
Strong points:
Very creative use of multiple learning rules during training.
People could never get localized learning to work well enough to compete with SGD. This is the first work that uses some localized learning and manages to come close. This is a big success.
These findings have broad applicability, from brain-like learning algorithms, efficient training on one GPU, and efficient learning algorithms for model parallelism for massive networks where communication is the bottleneck.
Weak points:
Missing ablation for the weak supervision algorithm.
This work is very impactful in many different ways, but it only mentions the computational efficiency perspective. The related work needs to be expanded to make readers aware of the impact of this work.
The heuristic for learning mode selection is complicated. Initial experiments suggested that a simple learning-rate-schedule-like way to select the learning mode would be possible.
Some algorithmic details in the weak supervision algorithm not clear.
Recommendation (short): This is very important work with results that will significantly impact many fields (efficient training, parallelization, brain-like algorithms). It is a creative solution to a significant problem in localized learning. I strongly recommend accepting this work. If this work is rejected, I will no longer review for future ICLR conferences. I recommend this work to be accepted as an oral presentation. Selecting this work for a best paper award would be appropriate.
Recommendation (long): This work is impactful for multiple reasons:
Localized learning is difficult. There has not been a work that uses localized learning and makes it work close to SGD performance on large datasets/models.
The brain does not use SGD, and it is difficult to think about algorithms that work in the brain and yield good performance. It might be that initial learning in the brain is done differently until local learning rules are used. This view is mostly ignored, but this paper yields evidence that such learning might be possible and efficient.
Layers updated with localized rules can be updated independently of other layers (if the weakly supervised rule is not used). This enables fully asynchronous training for early layers. There will be a synchronization point at the SGD layers, but through pipeline parallelism, the communication overhead can be hidden in Hebbian layers. This enables the training of massive neural networks with trillions of parameters. With current parallelism tools becoming more and more limited, this is a crucial innovation since previous asynchronous parallelism procedures always decrease predictive performance. This is the first work that shows a way to do asynchronous parallel training without performance degradation.
Beyond this, the paper also yields some speedups for tasks while decreasing predictive performance only slightly. This is also an impressive feat, but the overall broad insights this paper yields make it much more impactful than this result. As such, I do not view the experimental results as the main contribution, but overall, the papers' main contribution is that it shows a way to include (gradual) localized learning in a neural network while not impacting performance.
Comments for authors: This is excellent work — well done! I think the main weakness is currently a missing ablation on the effect of the weak supervision rule. You note that it improves performance, but by how much would be an important detail. If you do not include these ablations, I would still accept the paper, but I might rescind my oral presentation recommendation.
Another issue is that your work is relevant in many different domains, but you keep it confined to the idea that your method is only useful for faster training. I think making the reader aware that local training has many advantages across many domains could be very valuable. You do not need to elaborate on this, but I would like to see some of these connections in the paper because not everyone has the background to see these connections. I think you can do this mostly by mentioning it in the conclusion since you already mention a little bit of work in parallelism, and you mention previous results about local learning that failed to obtain good performance. Another line of work that I would mention in the related work section is that of neuroscientific faithful learning rules. The most relevant line of research is the work on various forms of feedback alignment and other algorithms. For a summary of past research and results on large datasets, see Bartunov et al., 2018[1]. Beyond this, you might want to add "sparse training" to the related work on efficient deep learning. Sparse training differentiates from pruning during training by initializing the neural network sparsely during initialization (not densely and then prune to sparse). See work on a mixture of experts[2,3] and sparse dynamic training[4,5,6]. I do not require you to include these references, but they might improve the related work section.
On the algorithmic and experimental side, it seems that a simple learning-rule-like schedule might be sufficient for selecting the learning mode. While I do not require you to add these experiments, it would make the algorithm simpler and more appealing if you can show that a simple learning rule works a la "warmup with SGD for 5 epochs, then shift by 1 layer (block) every 5 epochs" etc.
[1] Bartunov et al., 2018. Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures.
[2] Shazeer et al., 2017. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer.
[3] Lepikhin et al., 2020. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding.
[4] Mostafa & Wang, 2019. Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization.
[5] Dettmers & Zettlemoyer, 2019. Sparse Networks from Scratch: Faster Training without Losing Performance.
[6] Evci et al., 2019. Rigging the Lottery: Making All Tickets Winners.
Update:
Comments for Area Chair and Reviewers:
If I view this work merely by the story conveyed in the paper, my assessment would be more in line with the other reviewers. I am not quite sure if this is the right way to evaluate this paper since I view it as having a broader impact that goes beyond the story in the paper, but other reviewers disagree with my view on its broader impacts. I see this as a sign that the paper is currently not in a good enough state to really convey its potential impact.
Comments for authors:
I believe you still did good work here on the merits of the "speedup training" story that you convey in your paper. I believe that you have much more than this in your hands though. I think you could go two ways from here: (1) get this paper accepted in this form and work closely on the other angles that this work offers in a new paper, e.g. learning which is in line with biological or efficient parallelization of large networks. The second way (2) would be to rewrite this paper more in line with that view and resubmit. I think (1) might be better for you. I do not think many reviewers would understand a paper that comes from the process in (2). Good luck! |
ICLR | Title
Spanning Tree-based Graph Generation for Molecules
Abstract
In this paper, we explore the problem of generating molecules using deep neural networks, which has recently gained much interest in chemistry. To this end, we propose a spanning tree-based graph generation (STGG) framework based on formulating molecular graph generation as a construction of a spanning tree and the residual edges. Such a formulation exploits the sparsity of molecular graphs and allows using compact tree-constructive operations to define the molecular graph connectivity. Based on the intermediate graph structure of the construction process, our framework can constrain its generation to molecular graphs that satisfy the chemical valence rules. We also newly design a Transformer architecture with tree-based relative positional encodings for realizing the tree construction procedure. Experiments on QM9, ZINC250k, and MOSES benchmarks verify the effectiveness of the proposed framework in metrics such as validity, Fréchet ChemNet distance, and fragment similarity. We also demonstrate the usefulness of STGG in maximizing penalized LogP value of molecules.
1 INTRODUCTION
Researchers have extensively studied graph generative models, dating back to the early work of Erdös and Rényi (Erdös et al., 1959). Recently, models based on deep neural networks (DNNs) have gained much attention due to their expressive power in learning a graph dataset. The molecule-generating DNNs stand out among them for their success in the task of drug discovery.
Recent works have proposed molecule-generating DNNs based on string-based and graph-based representations (Segler et al., 2018; Jin et al., 2018; You et al., 2018; Shi et al., 2020; Jin et al., 2020). For example, Segler et al. (2018) proposed to train language models on the domain-specific linear string representation of molecules, i.e., simplified molecular-input line-entry system (SMILES, Weininger 1988). Since the string-based models ignore the inherent graph structure, recent works explore the graph-based generation that use (a) atom-by-atom (You et al., 2018; Shi et al., 2020; Luo et al., 2021) or (b) substructure-based (Jin et al., 2018; 2019; 2020) operations.
Notably, the substructure-based generative models (Jin et al., 2018; 2019; 2020) successfully exploit molecular prior knowledge: the graphs are sparsely connected and can be represented as a junction tree with molecular substructures as building blocks. Based on such knowledge, the models use junction tree construction operators which (a) require a smaller number of steps to generate the whole molecular graph and (b) guarantee generating molecules that satisfy the chemical valence rules. However, despite such advantages, a recent benchmark (Polykovskiy et al., 2020) suggests that they do not outperform the existing methods in terms of learning the data distribution, even when compared with simple SMILES-based language models. We hypothesize that this is because the models use a coarse-grained representation of the molecule and hence may lack the ability to learn the inner semantics of each substructure-based building block.
Contribution. In this work, we propose a novel framework, coined spanning tree-based graph generation (STGG), for fine-grained generation of molecules while exploiting their sparsity.1 Mainly inspired by the SMILES representation of molecules, our idea is to generate the molecular graph as a composition of a spanning tree and the corresponding residual edges, with atoms and bonds as building blocks. Such a formulation allows our framework to utilize compact tree-constructive operations to define the molecular graph connectivity. See Figure 1 for an illustration of how we formulate the generation of a molecular graph as a sequence of tree-constructive operations.

1While our framework is designed for general sparse graphs, we focus on molecular graphs in this paper.
Since our framework maintains the molecular graph structure during construction, it can pre-determine decisions that (a) violate the graph construction rules or (b) lead to molecules that violate the chemical valence rule. Such criteria allow controlling the generative model to guarantee generating valid molecular graphs by forbidding invalid actions. This is in contrast to prior works (Shi et al., 2020; Luo et al., 2021) that generate the molecular graph atom-by-atom but determine the validity of construction operations through a sample-rejection scheme.
To recognize the spanning tree-based representation used in our STGG framework, we propose a Transformer architecture (Vaswani et al., 2017) with tree-based relative encoding. Inspired by recent works (Villmow et al., 2021; Lukovnikov & Fischer, 2021; Ying et al., 2021) on tree-based and graph-based Transformers, our framework expresses the relative position between two vertices as the number of forward and reverse edges in the shortest path between them. We also introduce an attention-based mechanism for constructing residual edges.
We experiment on the popular graph generation benchmarks QM9, ZINC250K, and MOSES to validate the effectiveness of our algorithm. In the experiments on QM9 and ZINC250K, our STGG framework outperforms the existing graph-based generative models by a large margin. On the MOSES benchmark, our algorithm achieves superior performance compared to both string-based and graph-based methods for the majority of the metrics, e.g., Fréchet ChemNet distance (Preuer et al., 2018) and fragment-based similarity. We also conduct experiments on the offline optimization task for a high penalized octanol-water partition coefficient and achieve competitive results.
2 SPANNING TREE-BASED GENERATION OF GRAPHS (STGG)
2.1 OVERVIEW
In this section, we introduce our spanning tree-based graph generation (STGG) framework to sequentially generate a molecule as a composition of a spanning tree and residual edges. To this end, we propose compact tree-constructive operations inspired by the simplified molecular-input line-entry system (SMILES, Weininger, 1988). In contrast to the existing SMILES-based molecular generative methods, our framework (a) allows inferring the intermediate graph structure and (b) is generally applicable to graph types other than molecules. In particular, (a) further enables our framework to control the construction process such that the sequential operations comply with tree-constructive grammar and only generate molecules satisfying the chemical valence rule.
Molecular graph representation. To apply our framework, we represent a molecule as a bipartite graph G = (A, B, E) where A and B are the sets of vertices associated with atoms and bonds of the molecule, respectively.2 Each edge {a, b} ∈ E is assigned for each adjacent pair of atom and bond. We assign attributes xa ∈ Xatom and xb ∈ Xbond for vertices a ∈ A and b ∈ B to indicate the corresponding atom type and bond order, respectively. For example, {"C", "N", "O"} ⊆ Xatom and {"-", "="} ⊆ Xbond. See Figure 1 for an example of such a molecular graph representation.

2Many existing works, e.g., (Shi et al., 2020), use a non-bipartite graph with bonds assigned to edges.

Molecular graph from sequence of decisions. To generate the molecular graph G = (A, B, E), our framework makes a sequence of decisions d1, . . . , dT to generate a spanning tree T = (AT, BT, ET) and a set of residual edges ER = E \ ET. At each iteration, seven types of decisions are applicable, i.e., attach atom, attach bond, branch start, branch end, res atom, res bond, and terminate. See Table 1 for examples of decisions and the corresponding operations. We provide a detailed description of the graph construction process in Section 2.2.
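To make this representation concrete, the following is a minimal Python sketch (our own illustration, not the authors' code) of the bipartite graph G = (A, B, E) with atom and bond attribute maps, instantiated for propane (C-C-C).

```python
# Minimal sketch of the bipartite molecular graph G = (A, B, E):
# atom vertices carry types from X_atom, bond vertices carry orders from
# X_bond, and every edge joins one atom vertex to one bond vertex.
from dataclasses import dataclass, field

@dataclass
class MolGraph:
    atom_attr: dict = field(default_factory=dict)  # atom id -> atom type, e.g. "C"
    bond_attr: dict = field(default_factory=dict)  # bond id -> bond order, e.g. "-"
    edges: set = field(default_factory=set)        # frozenset({atom_id, bond_id})

    def add_edge(self, atom_id, bond_id):
        self.edges.add(frozenset((atom_id, bond_id)))

# Propane C-C-C: three atom vertices and two single-bond vertices.
g = MolGraph()
g.atom_attr = {"a0": "C", "a1": "C", "a2": "C"}
g.bond_attr = {"b0": "-", "b1": "-"}
for atom_id, bond_id in [("a0", "b0"), ("a1", "b0"), ("a1", "b1"), ("a2", "b1")]:
    g.add_edge(atom_id, bond_id)
print(len(g.edges))  # 4 atom-bond incidences
```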
Generating valid molecular graphs. Without any control, a model may generate decisions that (a) do not comply with the grammar of STGG or (b) lead to a molecule violating the chemical valence rule. To prevent this, we introduce two criteria for determining the validity of a given decision with respect to (a) and (b). We elaborate on this in Section 2.3.
2.2 DECISION PROCESS FOR SPANNING TREE-BASED GRAPH GENERATION
We now explain how our STGG framework incorporates the decisions d1, . . . , dT to build the spanning tree T = (AT, BT, ET) and residual edges ER from scratch. To this end, our framework introduces the state information of (a) a pointer vertex ipoint ∈ AT ∪ BT for specifying the target of the next operation, (b) a stack Sbranch that stores vertices to use later as the starting points of "branches" in the spanning tree, and (c) a list Lres that stores vertices to use later for constructing residual edges. In what follows, we describe the seven types of operations, i.e., attach atom, attach bond, branch start, branch end, res atom, res bond, and terminate, corresponding to decision values d ∈ Xatom ∪ Xbond ∪ Lres ∪ {"(", ")", "*", "[eos]"}, in detail. See Table 1 for the pairs of operations and the corresponding decisions. We also provide an example of the graph construction process in Figure 2.

Algorithm 1 Tree-based generation of molecular graphs
 1: Input: sequence of decisions d1, . . . , dT.
 2: Output: graph G = (A, B, E), atom attributes {xa}a∈A, and bond attributes {xb}b∈B.
 3: Set AT ← ∅, BT ← ∅, ET ← ∅, ER ← ∅, and T ← (AT, BT, ET).  ▷ Initialize the empty graph.
 4: Set Lres as an empty list and Sbranch as an empty stack.
 5: for t = 1, . . . , T do
 6:   if dt ∈ Xatom then  ▷ Add a new atom vertex.
 7:     Create a new atom vertex a and set AT ← AT ∪ {a} and xa ← dt.
 8:     If |BT| > 0, set ET ← ET ∪ {{a, ipoint}}.  ▷ An edge is added when the tree is not empty.
 9:     Set ipoint ← a.
10:   if dt ∈ Xbond then  ▷ Add a new bond vertex.
11:     Create a new bond vertex b and set ET ← ET ∪ {{b, ipoint}}, BT ← BT ∪ {b}, and xb ← dt.
12:     Set ipoint ← b.
13:   if dt = "*" then insert ipoint into Lres.  ▷ Add the pointer vertex to the list.
14:   if dt ∈ Lres then pop dt from Lres and update ER ← ER ∪ {{ipoint, dt}}.  ▷ Add a new residual edge.
15:   if dt = "(" then insert ipoint into Sbranch.  ▷ Add the pointer vertex to the stack.
16:   if dt = ")" then set ipoint ← pop(Sbranch).  ▷ Update the pointer vertex from the stack.
17: Set A ← AT, B ← BT, and E ← ET ∪ ER.
Attaching atom and bond vertices to the spanning tree. If the decision d specifies one of the atom or bond attributes, i.e., d ∈ Xatom or d ∈ Xbond, it applies the corresponding attach atom or attach bond operation, respectively. To be specific, the attach atom operation adds a new atom vertex a into the spanning tree T as a neighbor of the pointer vertex ipoint, i.e., AT ← AT ∪ {a}, ET ← ET ∪ {{a, ipoint}}. The value d is set as the new atom attribute, i.e., xa ← d. The newly added vertex is set as the next pointer vertex, i.e., ipoint ← a. The attach bond operation similarly adds a new bond vertex. For example, a line graph can be expressed as a sequence of attach atom and attach bond operations, e.g., C-C-C where "C" ∈ Xatom and "-" ∈ Xbond.

Branching out the spanning tree. To express graph structures with vertices of degree larger than two, our framework utilizes pairs of the branch start and branch end operations with decision values of "(" and ")", respectively. To be specific, the branch start operation inserts the current pointer vertex into a stack Sbranch of vertices. Then the branch end operation pops a vertex from the stack Sbranch and sets it as the new pointer vertex. For example, a graph with one atom vertex of degree three is constructed from the sequence of decisions C-C(-C)(-C).
Adding residual edges. To construct cyclic molecular graphs, our framework generates residual edges based on pairs of res atom and res bond operations, corresponding to decision values of "*" and d ∈ Lres, respectively. To be specific, the res atom operation inserts the current (atom) pointer vertex into a list Lres. Next, when a decision value d ∈ Lres is received for the res bond operation, the corresponding vertex d is popped from the list Lres and forms a new residual edge with the current (bond) pointer vertex, i.e., ER ← ER ∪ {{d, ipoint}}. For example, a cyclic molecular graph is constructed from the sequence of decisions C*-O-O-1, where "1" indicates a res bond operation whose decision is the first atom vertex with attribute "C".
Termination. The decision "[eos]" applies the terminate operation to finish the construction. We provide the full algorithm in Algorithm 1. We also provide an algorithm to extract a sequence of decisions for constructing a given graph in Appendix A. Such an algorithm is used to obtain sequences of decisions as targets for training the generative model under the STGG framework.
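As a complement to the pseudocode, below is a runnable Python sketch (ours, not the authors' implementation) that replays a decision sequence into atom/bond vertices, tree edges, and residual edges, following Algorithm 1 on the ring example C*-O-O-1; representing the res bond decision as an integer back-reference is our simplification.

```python
# Sketch of Algorithm 1: replay decisions (Table 1) into a graph. "*" marks
# a ring-opening atom; an integer k refers back to the k-th marked atom.
X_ATOM, X_BOND = {"C", "N", "O"}, {"-", "=", "#"}

def decode(decisions):
    atoms, bonds, tree_edges, res_edges = [], [], [], []
    pointer, branch_stack, res_list = None, [], []
    for d in decisions:
        if d in X_ATOM:                        # attach_atom
            a = ("atom", len(atoms)); atoms.append(d)
            if pointer is not None:
                tree_edges.append((pointer, a))
            pointer = a
        elif d in X_BOND:                      # attach_bond
            b = ("bond", len(bonds)); bonds.append(d)
            tree_edges.append((pointer, b)); pointer = b
        elif d == "*":                         # res_atom: remember this atom
            res_list.append(pointer)
        elif d == "(":                         # branch_start
            branch_stack.append(pointer)
        elif d == ")":                         # branch_end
            pointer = branch_stack.pop()
        else:                                  # res_bond: close a ring
            res_edges.append((pointer, res_list.pop(d - 1)))
    return atoms, bonds, tree_edges, res_edges

# The cyclic example from the text: C*-O-O-1 closes a three-membered ring.
atoms, bonds, tree_edges, res_edges = decode(["C", "*", "-", "O", "-", "O", "-", 1])
print(atoms, bonds, len(tree_edges), res_edges)
# ['C', 'O', 'O'] ['-', '-', '-'] 5 [(('bond', 2), ('atom', 0))]
```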
2.3 MASKING OUT INVALID DECISIONS FOR A VALID MOLECULAR GRAPH
Based on Algorithm 1, we develop two criteria for determining whether a sequence of decisions leads to (a) valid generation of a molecular graph and (b) generation of a molecule satisfying the valence rule. These criteria are used to mask out invalid decisions and thereby guarantee generating a valid molecular graph.
Validity of graph generation. To determine whether a sequence of decisions leads to a valid generation of a molecular graph, we propose an algorithm that outputs a set of valid decisions given the previous decision d, the stack of pointer vertices Sbranch, and the list of atom vertices Lres during execution of Algorithm 1. In what follows, we provide a brief description of the grammar enforced by the algorithm; the detailed algorithm is given in Appendix B.
• The branch end operation only appears when the stack of pointer vertices Sbranch is non-empty.
• The operations res atom and res bond are atom-specific and bond-specific, hence they only appear when the pointer vertex is located at an atom vertex and a bond vertex, respectively.
• All bond vertices have degree two, hence branch start and branch end operations only appear when the pointer vertex is located at an atom vertex.
• The stack does not contain duplicates of a pointer vertex at the same time.
Here, we note that our criteria for valid molecular graph generation do not enforce branches and rings to be closed, e.g., C*=C-C#N is allowed by our criteria. This does not violate the validity since our Algorithm 1 may still define a valid molecular graph by ignoring the open branches and the open rings during construction, i.e., C*=C-C#N generates a molecule identical to that of C=C-C#N.
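For illustration, here is a compact sketch we distilled from these rules (the full procedure is Algorithm 3 in Appendix B); the token sets are placeholders for the actual vocabularies.

```python
# Sketch of the grammar mask: given the previous decision and the current
# stack/list state, return the set of decisions that keep the construction
# well-defined under the rules above.
def valid_next(prev, branch_stack, res_list,
               x_atom=frozenset({"C", "N", "O"}),
               x_bond=frozenset({"-", "=", "#"})):
    if prev in x_atom or prev == "*":      # pointer sits on an atom vertex
        ok = set(x_bond) | {"(", "*", "[eos]"}
        if branch_stack:                   # ")" needs a non-empty stack
            ok.add(")")
    elif prev in x_bond:                   # pointer sits on a bond vertex
        ok = set(x_atom) | set(res_list)   # continue the tree or close a ring
    elif prev == "(":
        ok = set(x_bond) | {"[eos]"}       # a branch always starts with a bond
    elif prev == ")":
        ok = set(x_bond) | {"(", ")", "[eos]"}
    else:                                  # prev closed a residual edge
        ok = {")"}                         # rings close at the end of a branch
    return ok

print(valid_next("C", branch_stack=[], res_list=[]))
# e.g. {'-', '=', '#', '(', '*', '[eos]'}
```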
Validity of satisfying the valence rule. To consider the chemical validity of molecules, our framework offers the ability to constrain its generation to molecules that satisfy the valence rule for each atom. That is, the generated graph G = (A, B, E) satisfies the constraint $v(x_a) \ge \sum_{b \in \mathcal{N}(a)} o(x_b)$ for every atom vertex a ∈ A, where $v(x_a)$ denotes the valence of an atom type $x_a$ and $o(x_b)$ denotes the bond order. To this end, we keep a record r(a) of the available valence for each atom a ∈ A and update it for each decision. For example, when a bond vertex b is newly added, the record of the neighboring atom vertex a is updated by r(a) ← r(a) − o(x_b). The main idea is to forbid actions that lead to negative values of r(a). We provide a detailed algorithm in Appendix C.
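A tiny sketch of this bookkeeping (ours; the full procedure is Algorithm 4 in Appendix C), with an illustrative valence table:

```python
# Track the remaining valence r(a) per atom and mask decisions that would
# drive it negative. VALENCE and ORDER are small illustrative tables.
VALENCE = {"C": 4, "N": 3, "O": 2}
ORDER = {"-": 1, "=": 2, "#": 3}

record = {"a0": VALENCE["C"]}          # fresh carbon: r(a0) = 4
record["a0"] -= ORDER["="]             # attach a double bond: r(a0) = 2
allowed = {x for x in ORDER if ORDER[x] <= record["a0"]}
print(allowed)                         # {'-', '='}; '#' is masked out
```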
3 TRANSFORMER ARCHITECTURE FOR TREE-BASED GENERATION
In this section, we describe our deep neural network architecture for generating the sequence of decisions $d_1, \ldots, d_T$ under the STGG framework. To accurately recognize the decision process, we employ tree-based relative positional encodings on the intermediate spanning tree T. We also introduce an attention mechanism to express a probability distribution over Lres, which depends on the intermediate state of the algorithm.
3.1 TREE-BASED POSITIONAL ENCODING FOR MULTI-HEAD ATTENTION LAYERS
Each intermediate layer in our model is a combination of a multi-head self-attention module and a position-wise feed-forward neural network similar to that of Vaswani et al. (2017). The main difference is how we modify the architecture to incorporate tree-based positional encodings. To be specific, let $H = [h_1^\top, \ldots, h_T^\top] \in \mathbb{R}^{T \times \ell}$ denote the input of a self-attention module, where $\ell$ is the hidden dimension and $h_t \in \mathbb{R}^{1 \times \ell}$ is the hidden representation at position $t$. The input $H$ is projected by three matrices $W_Q \in \mathbb{R}^{\ell \times \ell_K}$, $W_K \in \mathbb{R}^{\ell \times \ell_K}$, and $W_V \in \mathbb{R}^{\ell \times \ell_V}$ to the corresponding representations $Q$, $K$, and $V$, respectively. A single self-attention head is then calculated as

$$Q = HW_Q, \quad K = HW_K, \quad V = HW_V, \tag{1}$$
$$A = \frac{QK^\top}{\sqrt{\ell_K}} + P, \qquad P_{t_1,t_2} = z^{(1)}_{\phi_{\text{forward}}(t_1,t_2)} + z^{(2)}_{\phi_{\text{backward}}(t_1,t_2)} + z^{(3)}_{\phi_{\text{seq}}(t_1,t_2)}, \tag{2}$$
$$\text{Attention}(H) = \text{SoftMax}(M \circ A)V, \tag{3}$$

where $\text{Attention}(H)$ is the output of the attention head, $M$ is the triangular mask that forbids the model from accessing future information while making a prediction, and $\circ$ denotes element-wise multiplication between matrices.
Furthermore, $P$ is the newly introduced relative positional encoding. It is a summation over the trainable embedding vectors $z^{(1)}, z^{(2)}, z^{(3)}$ indexed by the relative position values $\phi_{\text{forward}}(t_1, t_2)$, $\phi_{\text{backward}}(t_1, t_2)$, and $\phi_{\text{seq}}(t_1, t_2)$. To be specific, the tree-based relative positions $\phi_{\text{forward}}(t_1, t_2)$ and $\phi_{\text{backward}}(t_1, t_2)$ denote the numbers of forward and backward edges on the spanning-tree path between the pointer vertices at the $t_1$-th and $t_2$-th time steps. The direction of an edge is decided by the order of generation in the STGG framework. Such an encoding was inspired by recent works (Villmow et al., 2021; Lukovnikov & Fischer, 2021; Ying et al., 2021) using Transformers to recognize graphs and trees. Finally, the sequence-based relative position $\phi_{\text{seq}}(t_1, t_2) = t_1 - t_2$ denotes the relative difference of time steps between the decisions.
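One way to compute these counts, sketched by us (the paper does not give code), is via the lowest common ancestor of the two pointer vertices on the spanning tree, with edges directed in generation order:

```python
# phi_forward / phi_backward between two tree vertices: count edges
# descended from (forward) and climbed to (backward) their lowest common
# ancestor, where parent pointers follow the order of generation.
def tree_relative_position(parent, u, v):
    def ancestors(x):                      # chain from x up to the root
        chain = [x]
        while parent[x] is not None:
            x = parent[x]; chain.append(x)
        return chain
    anc_u, anc_v = ancestors(u), ancestors(v)
    lca = next(x for x in anc_u if x in set(anc_v))
    backward = anc_u.index(lca)            # edges climbed from u to the LCA
    forward = anc_v.index(lca)             # edges descended from the LCA to v
    return forward, backward

# Tiny tree: 0 -> 1 -> 2 and 1 -> 3, edges directed in generation order.
parent = {0: None, 1: 0, 2: 1, 3: 1}
print(tree_relative_position(parent, 2, 3))   # path 2 -> 1 -> 3 gives (1, 1)
```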
3.2 ATTENTION FOR UPDATING RESIDUAL EDGES.
Our model generates a categorical distribution over the space $\mathcal{X} = \mathcal{X}_{\text{atom}} \cup \mathcal{X}_{\text{bond}} \cup L_{\text{res}} \cup \{$"(", ")", "*", "[eos]"$\}$. It is relatively straightforward to output an unnormalized probability over values of $\mathcal{X}_{\text{atom}} \cup \mathcal{X}_{\text{bond}} \cup \{$"(", ")", "*"$\}$ using a linear classifier on top of the Transformer model. However, it is non-trivial to assign probability values for the res bond operation, i.e., decision values $d \in L_{\text{res}}$, since $L_{\text{res}}$ varies between time steps. To handle this case, we use an attention-based mechanism for assigning unnormalized probability to decision values in the list $L_{\text{res}}$. To be specific, at the final layer of our model, we obtain the following probability distribution $p(d)$:

$$p(d) \propto \begin{cases} m_g(d) \cdot m_v(d) \cdot \exp(w_d^\top h) & \forall d \in \mathcal{X}_{\text{atom}} \cup \mathcal{X}_{\text{bond}} \cup \{\text{"(", ")", "*"}\}, \\ m_g(d) \cdot m_v(d) \cdot \exp(h_d^\top W_1 W_2^\top h) & \forall d \in L_{\text{res}}, \end{cases} \tag{4}$$

where $w_d \in \mathbb{R}^{1 \times \ell}$ is a decision-specific vector, $W_1, W_2 \in \mathbb{R}^{\ell \times \tilde{\ell}}$ are weight matrices, and $h$ is the decision embedding, i.e., the output of the Transformer layer corresponding to the previously made decision. Furthermore, $h_d$ is the embedding corresponding to a past decision $d \in L_{\text{res}}$. Finally, $m_g(d)$ and $m_v(d)$ are masks for excluding invalid decisions that violate the validity of graph generation and the valence rule, respectively. The masks are obtained using the criteria explained in Section 2.3. We use the masks during both training and evaluation of the model; this differs from existing graph-generative models, which forbid invalid decisions only at evaluation using a sample-rejection scheme.
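A numerical sketch of Eq. (4) (ours; the dimensions and the all-ones mask are placeholders): fixed-vocabulary logits come from a linear head, ring-closure logits from a bilinear score against the embeddings of past "*" decisions, and both masks multiply the unnormalized scores.

```python
import numpy as np

def decision_probs(h, W_vocab, h_res, W1, W2, mask):
    vocab_scores = np.exp(W_vocab @ h)          # one score per fixed token
    res_scores = np.exp(h_res @ W1 @ W2.T @ h)  # one score per open ring
    scores = np.concatenate([vocab_scores, res_scores]) * mask
    return scores / scores.sum()

rng = np.random.default_rng(0)
l, l_tilde, V, R = 8, 4, 6, 2                   # hidden dims, vocab size, open rings
p = decision_probs(rng.normal(size=l),
                   rng.normal(size=(V, l)),     # linear head, one row per token
                   rng.normal(size=(R, l)),     # embeddings of past "*" steps
                   rng.normal(size=(l, l_tilde)),
                   rng.normal(size=(l, l_tilde)),
                   np.ones(V + R))              # all-ones mask: nothing forbidden
print(p.shape, round(p.sum(), 6))               # (8,) 1.0
```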
4 RELATED WORKS
SMILES-based molecular generative models. Several studies proposed to generate a SMILES representation of molecules using string-based (Gómez-Bombarelli et al., 2016; Segler et al., 2018; Kim et al., 2021) or grammar-based (Kusner et al., 2017; Dai et al., 2018) models. While STGG is largely inspired by such works, it allows realizing the intermediate graph structure of the molecule being constructed, which the SMILES-based models cannot. This difference allows the adoption of structure-aware deep neural networks in STGG. To be specific, the difference between STGG and the SMILES-based models stems from our newly introduced graph construction procedure using a pointer vertex ipoint, a vertex list L, and a vertex stack S. These allow recognizing an incomplete sequence of decisions as a graph and assigning positions to each decision. In contrast,
an incomplete SMILES string does not define a graph structure and assigning positions to each character is non-trivial.
Graph-based molecular generative models. Researchers have developed a large variety of molecular graph generation frameworks based on atom-wise and bond-wise operations (You et al., 2018; Kajino, 2019; Popova et al., 2019; Madhawa et al., 2019; Honda et al., 2019; Shi et al., 2020; Zang & Wang, 2020; Luo et al., 2021). Our STGG framework simplifies the decision space of such models by exploiting the tree-like structure of molecular graphs. To be specific, STGG requires $O(|\mathcal{A}| + |\mathcal{B}|)$ decisions for constructing a molecule, while the existing atom-by-atom graph generative models typically require $O(|\mathcal{A}|^2)$ decisions. This implies that our generative model requires a smaller number of decisions for sparse graphs like molecules, i.e., when $|\mathcal{B}|$ is small. Furthermore, our work is the first to successfully train a Transformer architecture (Vaswani et al., 2017) for graph-based molecule generation.
In another line of research, several works (Jin et al., 2018; 2019; 2020) proposed generative models based on the junction-tree representation with molecular substructures as building blocks. Based on such a representation, these works utilize tree-constructive operations to generate the full graph. Since they operate on a coarse-grained molecular representation, they typically require fewer building blocks to generate the whole molecule. In comparison, our STGG framework utilizes a more fine-grained molecular representation and may additionally learn the inner semantics of the substructures that are used as building blocks for the junction tree.
5 EXPERIMENT
In this section, we report the experimental results of the proposed spanning tree-based graph generation (STGG) framework. In Section 5.1, we compare with the existing graph generative models on ZINC250K (Irwin et al., 2012) and QM9 (Ramakrishnan et al., 2014) and provide ablation studies on each component of our method using the ZINC250K dataset. In Section 5.2, we compare with the existing molecule generative models using the MOSES benchmark (Polykovskiy et al., 2020). Finally, in Section 5.3, we provide our results on the molecular optimization task with respect to the penalized octanol-water partition coefficient (PLOGP). We provide the implementation details and illustrations of the generated molecules in Appendix D and E, respectively.
5.1 MOLECULE GENERATION ON ZINC250K AND QM9 DATASETS
We first compare against the literature standard for the molecular generation task on the ZINC250K and QM9 datasets. To this end, we train our generative model on the respective datasets and sample 10,000 molecules to measure (a) the ratio of valid molecules (VALID), (b) the ratio of unique molecules (UNIQUE), and (c) the ratio of novel molecules with respect to the training dataset (NOVEL). We compare with the numbers reported by recently proposed graph generative models (Shi et al., 2020; Luo et al., 2021). We also provide an additional baseline of a Transformer architecture trained to generate the SMILES representation of the molecule (SMILES-TRANSFORMER).
We mark CORRECTABLE for methods which can optionally use a sample-rejection scheme to forbid decisions that violate the chemical rules at evaluation. Note that our framework can train the generative model under the valence correction mask during training, while existing graph generative
models use the valence correction only at evaluation. However, for comparison, we do not use the valence correction mask during training in this experiment.
We report the experimental results in Table 2. In the table, we observe that our STGG framework outperforms all the existing molecular graph generative models in terms of VALID, at the cost of relatively lower NOVEL. In particular, our generative model can achieve a 100% ratio of valid molecules on the QM9 dataset even without any correction procedure. Such a result highlights how our model can effectively learn the chemical rules and model the underlying distribution.
Finally, our STGG framework performing better than the SMILES-based Transformer implies that the performance of our generative model stems from the STGG framework itself, rather than merely from using the Transformer architecture.
Ablation studies. We also conduct ablation studies on the ZINC250K dataset to verify the effectiveness of our method. To this end, we report the experimental results of our method without specific components. To be specific, we ablate the effects of using the sequential relative positional encoding (S), the tree-based relative positional encoding (T), the graph-construction mask (G), and the valence rule mask (V). We also consider an additional baseline using the absolute positional encoding (A) as in the original Transformer architecture (Vaswani et al., 2017). In Table 3 and Figure 4, one can observe how each component of
our algorithm is crucial for achieving high VALID. In particular, the tree encoding is essential for the performance, showing the importance of tree-based representation that we use in our model.
5.2 MOLECULE GENERATION ON THE MOSES BENCHMARK
We also compare our method with the existing models on the MOSES benchmark. The MOSES benchmark offers a large collection of metrics to assess the overall quality of generated molecules. To be specific, in addition to VALID, UNIQUE, and NOVEL, we consider the internal diversity of molecules (INTDIV), the ratio of samples accepted by chemical filters (FILTERS), Fréchet ChemNet Distance (FCD), nearest neighbor similarity (SNN), fragment similarity (FRAG), and scaffold similarity
(SCAF). The similarity metrics of FCD, SNN, FRAG, SCAF are measured with respect to the test dataset of molecules and the scaffolds extracted from them.
In Tables 4 and 5, we provide our experimental results. Here, one can observe that our algorithm outperforms the existing works on 10 out of 15 metrics, including FILTERS, FCD-TEST, FCD-TESTSF, SNN-TEST, SNN-TESTSF, FRAG-TEST, FRAG-TESTSF, and SCAF-TEST. This highlights the ability of our STGG framework to successfully learn the training distribution.
5.3 MOLECULAR OPTIMIZATION FOR PENALIZED OCTANOL-WATER PARTITION COEFFICIENT
Finally, we demonstrate the usefulness of our STGG framework for the task of molecular optimization. To this end, we consider the literature-standard task of maximizing the penalized octanol-water partition coefficient (PLOGP). However, several works (Gao & Coley, 2020; Coley, 2020) have noted that the existing algorithms on this benchmark may not be practical, since PLOGP is ill-defined as a scoring function for molecules; it may assign high values to "unrealistic" molecules that are unstable and hard to synthesize in practice.
To consider this aspect, we propose a new algorithm which can control the quality of molecules by trading off scores and the realistic-ness of molecules. Using this algorithm, we demonstrate how our STGG is capable of generating both (a) high-scoring molecules and (b) realistic molecules with a reasonably high score. At a high level, we train a conditional generative model pθ(m|γ) under the STGG framework with PLOGP as the condition γ. At test time, we sample with a high value of γ to obtain high-scoring molecules. Such an algorithm is inspired by recent offline reinforcement learning algorithms (Schmidhuber, 2019; Kumar & Levine, 2020; Chen et al., 2021; Janner et al., 2021). We fully describe our molecular optimization algorithm in Appendix F.
In Table 5 and Figure 6, we report the results of our molecular optimization experiment. We provide additional illustrations of the generated molecules in Appendix G. Here, our STGG model is able to generate molecules with considerably high PLOGP scores outside the training distribution. Furthermore, in Figure 6 and Appendix G, one can observe how increasing γ gradually changes the optimized molecule from realistic structures to large, chain-like, and unrealistic structures.3 Given such results, one may conclude that our STGG combined with the offline optimization algorithm can successfully trade off high PLOGP against the realistic-ness of the generated molecules. However, we also remark that our results do not imply that our optimization results are strictly better than the baselines; we believe it is necessary to develop and incorporate quantitative measures of the realistic-ness of molecules to fairly evaluate molecular optimization algorithms. We believe such research to be an important future direction.
6 CONCLUSION
In this paper, we propose STGG, the first spanning tree-based framework for the generation of molecules using the Transformer architecture. The key idea of using a spanning tree for graph generation applies to graph types beyond molecules; we believe such an extension of our work to be both promising and interesting. We also propose an offline algorithm for molecular optimization that allows a trade-off between high scores and the realistic-ness of molecules. We leave further investigation of the newly proposed optimization algorithm as future work.
3This is in agreement with prior works (Shi et al., 2020; Ahn et al., 2020; Luo et al., 2021).
7 REPRODUCIBILITY STATEMENT
We provide an explicit description of our algorithms in Algorithm 1 and in Algorithms 2, 3, and 4 of the appendices. We list the hyperparameters, the hardware used for the experiments, and the data-processing information in Appendix D. We provide illustrations of the molecules generated for the experiments in Figure 6 and Appendices E and G. We submit the full implementation of our STGG framework and the baselines used in our experiments as supplementary material.
A EXTRACTING SEQUENCE OF DECISIONS FROM A MOLECULAR GRAPH
In this section, we explain our algorithm for finding a sequence of decisions to construct a given molecular graph G = (A, B, E). The high-level idea is to first perform a depth-first search on G to find a spanning tree T = (A, B, ET) and the corresponding set of residual edges ER = E \ ET. Then the algorithm traverses the spanning tree T according to the depth-first search tree while (a) allocating branch start and branch end for vertices with degree higher than two and (b) adding res atom and res bond operations for any vertex covered by a residual edge {a, b} ∈ ER. To this end, we utilize a stack Sbranch that stores the vertices of G and the branching tokens {"(", ")"} to visit. At each iteration, an element i of the stack Sbranch is popped. If i is a vertex, the algorithm adds the corresponding decision for the attach atom or attach bond operation. If the vertex has more than one successor with respect to the spanning tree T, the successors are inserted into the stack Sbranch with surrounding "(" and ")" tokens. If the vertex has only one successor, the successor is inserted into the stack without an additional operation. When the branching tokens {"(", ")"} are popped from the stack, the algorithm adds the corresponding decision value to the sequence of decisions. We describe the full scheme in Algorithm 2.
Algorithm 2 Generating a sequence of decisions for a molecular graph
 1: Input: graph G = (A, B, E), atom attributes {xa}a∈A, and bond attributes {xb}b∈B.
 2: Find a spanning tree T = (A, B, ET) of G based on depth-first order and set ER ← E \ ET.
 3: Initialize an empty sequence of decisions D.
 4: Choose the root a ∈ A of T and insert it into an empty stack Sbranch.
 5: do
 6:   Pop i from Sbranch.
 7:   if i ∈ A ∪ B then
 8:     Append xi to D.  ▷ Decision to attach the atom or bond vertex.
 9:     for j ∈ {j | j ∈ N(i), {i, j} ∈ ER} do  ▷ Decisions for residual edges.
10:       If i ∈ A, append "*" to D.
11:       If i ∈ B, append j to D.
12:     Let V denote {j | j ∈ N(i), j ∉ AT ∪ BT}.  ▷ Successors of i in depth-first order.
13:     If |V| > 1, insert "(" into Sbranch.  ▷ Allocate a decision to record the pointer vertex.
14:     Insert the vertices in V into Sbranch.  ▷ Allocate successors to visit later.
15:     If |V| > 1, insert ")" into Sbranch.  ▷ Allocate a decision to return to the pointer vertex.
16:   if i ∈ {"(", ")"} then
17:     Append i to D.
18: while |Sbranch| > 0
19: Output: sequence of decisions D = d1, . . . , dT to reconstruct G.
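The following runnable sketch (ours, not the authors' implementation) mirrors this traversal for trees with residual edges; it parenthesizes every successor when a vertex has several, matching the C-C(-C)(-C) pattern from Section 2.2, and handles only a single ring for brevity.

```python
# Serialize a DFS tree plus residual-edge endpoints into decisions.
# attr: vertex -> token; children: DFS successors; res_open: atoms emitting
# "*"; res_close: bond -> the marked atom its residual edge attaches to.
def serialize(root, attr, children, res_open, res_close):
    out, ring_ids, stack = [], {}, [root]
    while stack:
        v = stack.pop()
        if v in ("(", ")"):                # branching token allocated earlier
            out.append(v); continue
        out.append(attr[v])
        if v in res_open:                  # atom side of a residual edge
            ring_ids[v] = len(ring_ids) + 1; out.append("*")
        if v in res_close:                 # bond side: refer back by ring id
            out.append(ring_ids[res_close[v]])
        succ = children.get(v, [])
        multi = len(succ) > 1
        for c in reversed(succ):           # reversed so pops run left-to-right
            if multi: stack.append(")")
            stack.append(c)
            if multi: stack.append("(")
    return out

# The ring example from Section 2.2, rebuilt from its spanning tree.
attr = {0: "C", 1: "-", 2: "O", 3: "-", 4: "O", 5: "-"}
children = {0: [1], 1: [2], 2: [3], 3: [4], 4: [5]}
print(serialize(0, attr, children, res_open={0}, res_close={5: 0}))
# ['C', '*', '-', 'O', '-', 'O', '-', 1]
```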
B ALGORITHMS FOR GRAPH MASKING
To determine whether a sequence of decisions leads to a valid generation of a molecular graph, we propose an algorithm that outputs a set of valid decisions given the current decision d, the stack of pointer vertices S, and the list of atom vertices L during execution of Algorithm 1. We provide the full description in Algorithm 3.
Algorithm 3 Determination of grammar violation
 1: Input: current decision d, stack Sbranch, and list Lres.
 2: Output: list of candidate decisions D that are valid.
 3: if d ∈ Xatom then
 4:   Set D ← Xbond.  ▷ The atom vertex is followed by a bond vertex.
 5:   Set D ← D ∪ {"("}.  ▷ The atom vertex may have more than one successor.
 6:   Set D ← D ∪ {"*"}.  ▷ The atom vertex may have a neighboring residual edge.
 7:   If |Sbranch| > 0, set D ← D ∪ {")"}.  ▷ The ")" decision appears only when Sbranch is non-empty.
 8:   Set D ← D ∪ {"[eos]"}.  ▷ Allow termination.
 9: if d ∈ Xbond then
10:   Set D ← Xatom.  ▷ The bond vertex is followed by an atom vertex.
11:   Set D ← D ∪ Lres.  ▷ The bond vertex may have a neighboring residual edge.
12: if d = "*" then
13:   Set D ← Xbond.  ▷ The atom vertex is followed by a bond vertex.
14:   Set D ← D ∪ {"("}.  ▷ The atom vertex may have more than one successor.
15:   Set D ← D ∪ {"*"}.  ▷ The atom vertex may have a neighboring residual edge.
16:   If |Sbranch| > 0, set D ← D ∪ {")"}.  ▷ The ")" decision appears only when Sbranch is non-empty.
17:   Set D ← D ∪ {"[eos]"}.  ▷ Allow termination.
18: if d ∈ Lres then
19:   Set D ← {")"}.  ▷ A residual edge is only constructed at the end of a branch.
20: if d = "(" then
21:   Set D ← Xbond.  ▷ A branch always starts with a bond vertex.
22:   Set D ← D ∪ {"[eos]"}.  ▷ Allow termination.
23: if d = ")" then
24:   Set D ← Xbond ∪ {"(", ")"}.  ▷ A branch is followed by the start or end of another branch.
25:   Set D ← D ∪ {"[eos]"}.  ▷ Allow termination.
We also establish a theoretical result showing that a sequence of decisions satisfying the criteria of Algorithm 3 is always a valid sequence of decisions for Algorithm 1. To this end, we define a valid molecular graph as follows.
Definition 1. A valid molecular graph G = (A,B, E) is a connected bipartite graph where the number of vertices adjacent to any bond vertex b ∈ B is exactly two, i.e., |N (b)| = 2.
Such a definition implies that a molecule should have exactly two atoms connected to each bond. Combined with additional conditions guaranteeing the well-behavedness of Algorithm 1 on a sequence of decisions, we obtain the following result.
Theorem 1. Let G = (A, B, E), S, and L be a graph, a stack of vertices, and a list of vertices updated by Algorithm 1 on a sequence of decisions d1, . . . , dT. If the sequence of decisions satisfies the criteria defined by Algorithm 3, the following properties hold:
P1 At the t-th step of Algorithm 1, |S| > 0 if dt = ")".
P2 At the t-th step of Algorithm 1, dt ∈ L if dt ∈ A ∪ B.
P3 When dT = [eos], the graph G is a valid molecular graph.
Here, P1 and P2 imply that the operations in Algorithm 1 are well-defined for d1, . . . , dT.
Proof. First, P1 is enforced by the step in Algorithm 3 which forbids the decision value of ")" when the stack S is empty. Next, P2 is enforced by the step selecting decision values from the current list of vertices L. To enforce P3, when dT = [eos], G (a) has to be a connected bipartite graph and (b) the number of vertices adjacent to any bond vertex has to be exactly two. For (a), Algorithm 3 allows the decision of dt ∈ Xatom ∪ L only when the pointer vertex is a bond vertex, i.e., d ∈ Xbond. Similarly, dt ∈ Xbond is allowed only when d ∈ Xatom ∪ {"*", "("}. For (b), the algorithm does not allow adding a bond vertex b ∈ B to the list of vertices L, which is required for any vertex with degree higher than two. Termination is not allowed when there exists a bond vertex with degree smaller than two.
C ALGORITHM FOR VALENCE MASKING
To consider the chemical validity of molecules, our framework offers the ability to constrain its generation to molecules that satisfy the valence rule for each atom. That is, the generated graph G = (A, B, E) satisfies the constraint $v(x_a) \ge \sum_{b \in \mathcal{N}(a)} o(x_b)$ for every atom vertex a ∈ A, where $v(x_a)$ denotes the valence of an atom type $x_a$ and $o(x_b)$ denotes the bond order.
To this end, we propose an algorithm that iteratively updates a record r(a) of the available valence for each atom vertex a ∈ A. The key idea is to (a) update the record accordingly for each addition of atoms and bond orders and (b) pre-allocate valence for the branch start and res atom operations by the minimum bond order $\min_{x \in \mathcal{X}_{\text{bond}}} o(x)$. Part (b) is required since the branch start and res atom operations indicate future bond vertices to be added as neighbors of the current atom vertex. We provide the full description in Algorithm 4.
Algorithm 4 Determination of valence rule violation
 1: Input: intermediate tree T = (AT, BT, ET), current pointer vertex i_point, previous pointer vertex ĩ_point, current decision d, previous decision d̃, and record r(·) of available valence.
 2: Output: newly updated r and the list D of decisions that violate the valence rule.
 3: if d ∈ Xatom then
 4:   Set r(i_point) ← v(d).  ▷ Initialize the record by the atom valence.
 5:   Set r(i_point) ← r(i_point) − o(d̃).  ▷ Update the record using the previously added bond vertex.
 6:   Set D ← {x | x ∈ Xbond, o(x) > r(i_point)}.  ▷ Reject bond orders higher than the record.
 7:   if r(i_point) < min_{x∈Xbond} o(x) then
 8:     Set D ← D ∪ {"(", "*"}.  ▷ Reject decisions requiring a minimal amount of valence.
 9: if d ∈ Xbond then
10:   if d̃ ≠ "(" then
11:     Set r(ĩ_point) ← r(ĩ_point) − o(d).  ▷ Update the record of the previously added atom vertex.
12:   else
13:     Set r(ĩ_point) ← r(ĩ_point) − o(d) + min_{x∈Xbond} o(x).  ▷ Update the previously added atom vertex considering the pre-allocated valence.
14:   Set D ← {x | x ∈ Xatom, v(x) < r(i_point)}.  ▷ Reject atom valences lower than the bond order.
15:   Set D ← D ∪ {x | x ∈ Lres, r(x) < o(d) − min_{x∈Xbond} o(x)}.  ▷ Reject residual-edge candidates with valence lower than the bond order.
16: if d = "(" then
17:   Set r(i_point) ← r(i_point) − min_{x∈Xbond} o(x).  ▷ Pre-allocate the minimum bond order.
18:   Set D ← {x | x ∈ Xbond, o(x) > r(i_point)}.  ▷ Reject bond orders higher than the record.
19: if d = ")" then
20:   Set D ← ∅.
21: if d = "*" then
22:   Set r(i_point) ← r(i_point) − min_{x∈Xbond} o(x).  ▷ Pre-allocate the minimum bond order.
23:   Set D ← {x | x ∈ Xbond, o(x) > r(i_point)}.  ▷ Reject bond orders higher than the record.
24: if d ∈ Lres then
25:   Set r(d) ← r(d) + min_{x∈Xbond} o(x) − o(x_{i_point}).  ▷ Update the record of the previously added atom vertex.
26:   Set D ← ∅.
Given Algorithms 3 and 4, we establish the following theoretical guarantee.

Definition 2. A valid molecular graph G = (A, B, E) satisfies the valence rule if $v(x_a) \ge \sum_{b \in \mathcal{N}(a)} o(x_b)$ for every atom vertex a ∈ A.

Theorem 2. Let G = (A, B, E) be a graph updated by Algorithm 1 on a sequence of decisions d1, . . . , dT. If the sequence of decisions satisfies the criteria defined by Algorithms 3 and 4, the corresponding graph G satisfies the valence rule.
Proof. To prove the validity of our algorithms, we show that (1) $r(a) \le v(x_a) - \sum_{b \in \mathcal{N}(a)} o(x_b)$ and (2) $r(a) \ge 0$ at any time step when applying Algorithm 4 to the sequence of decisions d1, . . . , dT.

For (1), we note that the record of an atom a is initialized as $v(x_a) - \sum_{b \in \mathcal{N}(a)} o(x_b)$ whenever it is newly added to the graph G by a decision. Furthermore, whenever a new edge is added by an attach bond or res bond operation, the corresponding bond order $o(x_b)$ is deducted from the record. Importantly, the minimum bond order $\min_{x \in \mathcal{X}_{\text{bond}}} o(x)$ is also added back to the record for an attach bond operation consecutive to a branch start operation or for a res bond operation. We note that this does not harm (1), since the minimum bond order has already been deducted (pre-allocated) by the corresponding branch start and res atom operations, respectively.

For (2), one can observe that Algorithm 4 filters out the atoms and bonds that would drive the record of the corresponding atom negative. This completes the proof of our theorem.
D IMPLEMENTATION DETAILS
In this section, we provide specific details on how we implement the STGG framework for our experiments.
Training detail. For all experiments, we train the Transformer under the STGG framework for 100 epochs with a batch size of 128 on every dataset. We use the AdamW (Loshchilov & Hutter, 2019) optimizer with a constant learning rate of 10^{-4}. We use three Transformer layers for QM9 and ZINC250K and six for MOSES. The rest of the Transformer-related configurations follow those of the original work (Vaswani et al., 2017): we use the attention module with an embedding size of 1024 with eight heads, an MLP with dimension 2048, and dropout with probability 0.1. Using a single Quadro RTX 6000 GPU, it takes approximately three, ten, and 96 hours to fully train the models on the QM9, ZINC250K, and MOSES datasets, respectively.
Pre-processing. For all datasets, we use the following atom vocabulary Xatom: {"CH", "CH2", "CH-", "CH2-", "C", "N-", "NH-", "N", "NH", "N+", "NH+", "NH2+", "NH3+", "O-", "O", "O+", "OH+", "F", "P", "PH", "PH2", "P+", "PH+", "S-", "S", "S+", "SH", "SH+", "Cl", "Br", "I"}. Note that we assign different features to the same atomic numbers with different numbers of explicit hydrogens and formal charges. This allows our algorithm to properly allocate the maximum valence for each atom feature. Next, we use the bond vocabulary Xbond = {"-", "=", "#"}, corresponding to bond orders single, double, and triple, respectively. For explicit calculation of atom valences during molecular construction, we train our models on kekulized molecules, i.e., aromatic bonds are fixed to single or double bonds.
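The kekulization step can be reproduced with RDKit as sketched below (ours); MolFromSmiles and Kekulize are standard RDKit calls, while the choice of example molecule is arbitrary.

```python
# Kekulize a molecule so aromatic bonds become explicit single/double
# bonds, as required for explicit valence bookkeeping during construction.
from rdkit import Chem

mol = Chem.MolFromSmiles("c1ccccc1O")           # phenol, aromatic form
Chem.Kekulize(mol, clearAromaticFlags=True)     # fix bonds to single/double
print(Chem.MolToSmiles(mol, kekuleSmiles=True)) # e.g. "OC1=CC=CC=C1"
```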
E EXAMPLE OF GENERATED MOLECULES
F OFFLINE OPTIMIZATION OF MOLECULES
In this section, we describe our offline molecular optimization algorithm mainly inspired by existing works in offline reinforcement learning (Schmidhuber, 2019; Chen et al., 2021; Janner et al., 2021) and offline model-based optimization (Kumar & Levine, 2020).
For maximizing a reward function defined on a molecule, our algorithm consists of two simple steps. First, our offline optimization algorithm trains a conditional generative model pθ(m|γ), where m is the molecule and γ is the reward function evaluated on the offline dataset of molecules. Next, the reward-conditional generative model samples highly-rewarding molecules by generating conditioned on high values of γ. In particular, we set the value of γ to extrapolate outside the training dataset. Owing to the high expressive power of the Transformer architecture, our algorithm can successfully generate highly-rewarding molecules.
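A minimal sketch of this recipe (ours; the paper does not specify how γ is encoded, so the clip-and-bin condition token below is an assumption):

```python
# Train a conditional decoder on (reward, decision-sequence) pairs by
# prepending a discretized reward token; sample with an extrapolated gamma.
def to_condition_token(plogp, n_bins=64, lo=-10.0, hi=15.0):
    frac = min(max((plogp - lo) / (hi - lo), 0.0), 1.0)   # clip to [0, 1]
    return int(frac * (n_bins - 1))                       # bin into a token id

def training_example(plogp, decisions):
    return [("[cond]", to_condition_token(plogp))] + decisions

# At test time, condition on a gamma above the dataset maximum to ask the
# model to extrapolate toward higher-scoring molecules.
print(training_example(plogp=12.0, decisions=[]))   # [('[cond]', 55)]
```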
G ADDITIONAL EXPERIMENTAL RESULTS ON MOLECULAR OPTIMIZATION
H COMPARISON WITH CG-VAE
In this section, we additionally compare our STGG framework with the CG-VAE model (Liu et al., 2018), another atom-by-atom graph generative model that allows masking out the action space to generate molecules satisfying the valence rules. Compared to Table 2, we additionally use the FCD, SNN, FRAG, and SCAF metrics to measure the faithfulness of the generative models in learning the underlying distribution of molecules. Note that the VALID metric used in Table 2 is insufficient to compare the faithfulness of STGG and CG-VAE, since both are guaranteed to generate molecules satisfying the valence rule.
In Tables 6 and 7, one can observe that our algorithm substantially outperforms CG-VAE in terms of faithfully learning the underlying distribution of molecules, at the cost of relatively lower UNIQUE. For example, the FCD score of our STGG on the ZINC dataset is 0.2775, while that of CG-VAE is 11.33. This highlights the expressive power of our STGG framework.

1. What is the main contribution of the paper in molecular graph generation?
2. What are the strengths of the proposed approach, particularly in its comprehensive evaluation?
3. What are the weaknesses of the paper regarding its novelty and comparison with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper proposes a spanning tree-based generative model (STGG) for molecular graphs. STGG sequentially generates a molecule's spanning tree and fills in the residual edges along the way. The spanning tree construction is similar to the standard SMILES representation, but the model operates on a molecular graph rather than a SMILES string. STGG adopts a tree-based Transformer with relative positional encoding for tree generation and an attention-based predictor for residual edge prediction. The method is evaluated on the standard ZINC250K, QM9, and MOSES benchmarks and outperforms existing baselines.
Review
Strength:
Comprehensive evaluation. The method is evaluated on standard ZINC250K, QM9, MOSES benchmarks and compared with many previous baselines. The model is able to achieve state-of-the-art results on most of the metrics.
Weakness:
As noted in the paper, SMILES strings are also constructed by spanning tree algorithms. As shown in Figure 2, there is very little difference between generating a spanning tree and generating a SMILES string. The main difference between STGG and a SMILES-based generator is the adoption of a tree-based Transformer.
STGG is a node-by-node graph generation method following the depth-first order (DFS) of a graph. CG-VAE (Liu et al., 2018) is also a node-by-node graph generator, but follows the breadth-first order (BFS). It would be great if the authors could compare STGG with CG-VAE given their similarity.
ICLR | Title
Spanning Tree-based Graph Generation for Molecules
Abstract
In this paper, we explore the problem of generating molecules using deep neural networks, which has recently gained much interest in chemistry. To this end, we propose a spanning tree-based graph generation (STGG) framework based on formulating molecular graph generation as a construction of a spanning tree and the residual edges. Such a formulation exploits the sparsity of molecular graphs and allows using compact tree-constructive operations to define the molecular graph connectivity. Based on the intermediate graph structure of the construction process, our framework can constrain its generation to molecular graphs that satisfy the chemical valence rules. We also newly design a Transformer architecture with tree-based relative positional encodings for realizing the tree construction procedure. Experiments on QM9, ZINC250k, and MOSES benchmarks verify the effectiveness of the proposed framework in metrics such as validity, Fréchet ChemNet distance, and fragment similarity. We also demonstrate the usefulness of STGG in maximizing penalized LogP value of molecules.
1 INTRODUCTION
Researchers have extensively studied graph generative models, dating back to the early works of Erdös Rényi (Erdös et al., 1959). Recently, models based on deep neural networks (DNNs) have gained much attraction due to their expressive power in learning a graph dataset. The molecule-generating DNNs stand out among them for their success in the task of drug discovery.
Recent works have proposed molecule-generating DNNs based on string-based and graph-based representations (Segler et al., 2018; Jin et al., 2018; You et al., 2018; Shi et al., 2020; Jin et al., 2020). For example, Segler et al. (2018) proposed to train language models on the domain-specific linear string representation of molecules, i.e., simplified molecular-input line-entry system (SMILES, Weininger 1988). Since the string-based models ignore the inherent graph structure, recent works explore the graph-based generation that use (a) atom-by-atom (You et al., 2018; Shi et al., 2020; Luo et al., 2021) or (b) substructure-based (Jin et al., 2018; 2019; 2020) operations.
Notably, the substructure-based generative models (Jin et al., 2018; 2019; 2020) successfully exploit the molecular prior knowledge: the graphs are sparsely connected and can be represented as a junction tree with molecular substructure as building blocks. Based on such knowledge, the models use the junction tree construction operators which (a) require a fewer number of steps to generate the whole molecular graph and (b) guarantee generating molecules that satisfy the chemical valence rules. However, despite such advantages, a recent benchmark (Polykovskiy et al., 2020) suggests that they do not outperform the existing methods in terms of learning the data distribution, even when compared with the simple SMILES-based language models. We hypothesize that this is due to the models using a coarse-grained representation of the molecule and they may lack the ability to learn the inner semantics of each substructure-based building block.
Contribution. In this work, we propose a novel framework, coined spanning tree-based graph generation (STGG), for fine-grained generation of molecules while exploiting their sparsity.1 Mainly inspired from the SMILES representation of molecules, our idea is to generate the molecular graph
1While our framework is designed for general sparse graphs, we focus on the molecular graphs in this paper.
as a composition of a spanning tree and the corresponding residual edges with atoms and bonds as building blocks. Such a formulation allows our framework to utilize compact tree-constructive operations to define the molecular graph connectivity. See Figure 1 for an illustration of how we formulate the generation of a molecular graph as a sequence of tree-constructive operations.
Since our framework maintains the molecular graph structure during construction, it can pre-determine decisions that (a) violate the graph construction rule and (b) lead to molecules that violate the chemical valence rule. Such criteria allow control over the generative model to guarantee generating valid molecular graphs by forbidding invalid actions. This is in contrast to prior works (Shi et al., 2020; Luo et al., 2021) that generate the molecular graph atom-by-atom but determines the validity of construction operations through a sample-rejection scheme.
To recognize the spanning tree-based representation used in our STGG framework, we propose a Transformer architecture (Vaswani et al., 2017) with tree-based relative encoding. Inspired by recent works (Villmow et al., 2021; Lukovnikov & Fischer, 2021; Ying et al., 2021) on tree-based and graph-based Transformers, our framework expresses the relative position between two vertices as the number of forward and reverse edges in the shortest path between them. We also introduce an attention-based mechanism for constructing residual edges.
We experiment on popular graph generation benchmarks of QM9, ZINC250K, and MOSES to validate the effectiveness of our algorithm. In the experiments on QM9 and ZINC, our STGG framework outperforms the existing graph-based generative models by a large margin. In the MOSES benchmark, our algorithm achieves superior performance compared to both string-based and graphbased methods for majority of the metrics, e.g., Fréchet ChemNet distance (Preuer et al., 2018) and fragment-based similarity. We also conduct experiments on the offline optimization task for high penalized octanol-water partition coefficient and achieve competitive results.
2 SPANNING TREE-BASED GENERATION OF GRAPHS (STGG)
2.1 OVERVIEW
In this section, we introduce our spanning tree-based graph generation (STGG) framework to sequentially generate a molecule as a composition of a spanning tree and residual edges. To this end, we propose compact tree-constructive operations inspired by the simplified molecular-input line-entry system (SMILES, Weininger, 1988). In contrast to the existing SMILES-based molecular generative methods, our framework (a) allows inferring the intermediate graph structure and (b) is generally applicable to graph types other than molecules. In particular, (a) further enables our framework to control the construction process such that the sequential operations comply with tree-constructive grammar and only generate molecules satisfying the chemical valence rule.
Molecular graph representation. To apply our framework, we represent a molecule as a bipartite graph G = (A,B, E) where A and B are the set of vertices associated with atoms and bonds of the molecule, respectively.2 Each edge {a, b} ∈ E is assigned for each adjacent pair of atom and bond. We assign attributes xa ∈ Xatom and xb ∈ Xbond for vertices a ∈ A and b ∈ B to indicate the corresponding atom type and bond order, respectively. For example, {"C", "N", "O"} ⊆ Xatom and {"-", "="} ⊆ Xbond. See Figure 1 for an example of such a molecular graph representation. Molecular graph from sequence of decisions. To generate the molecular graph G = (A,B, E), our framework makes a sequence of decisions d1, . . . , dT to generate a spanning tree T = (AT ,BT , ET )
2Many existing works, e.g., (Shi et al., 2020), use a non-bipartite graph with bonds assigned to edges.
and a set of residual edges ER = E \ET . At each iteration, seven types of decisions are applicable, i.e., attach atom, attach bond, branch start, branch end, res atom, res bond, and terminate. See Table 1 for examples of decisions and the corresponding operations. We provide a detailed description of the graph construction process in Section 2.2.
Generating valid molecular graphs. Without any control, a model may generate decisions that (a) do not comply with the grammar of STGG or (b) leads to a molecule violating the chemical valence rule. To prevent this scenario, we conduct two criteria for determining validity of the given decision for (a) and (b). We further elaborate this in Section 2.3.
2.2 DECISION PROCESS FOR SPANNING TREE-BASED GRAPH GENERATION
We now explain how our STGG framework incorporates the decisions d1, . . . , dT to build the spanning tree T = (AT ,BT , ET ) and residual edges ER from scratch. To this end, our framework introduces the state information of (a) a pointer vertex ipoint ∈ AT ∪ BT for specifying the target of
Algorithm 1 Tree-based generation of molecular graphs 1: Input: sequence of decisions d1, . . . , dT . 2: Output: graph G = (A,B, E), atom attributes {xa}a∈A, and bond attributes {xb}b∈B 3: Set AT ← ∅, BT ← ∅, ET ← ∅, ER ← ∅, and T ← (AT ,BT , ET ). . Initialize the empty graph. 4: Set Lres as an empty list and Sbranch as an empty stack. 5: for t = 1, . . . , T do 6: if dt ∈ Xatom then . Add a new atom vertex. 7: Create a new atom vertex a and set A ← A∪ {b} and xa ← dt. 8: If |BT | > 0, set ET ← ET ∪ {{a, ipoint}}. . Edge is added when tree is not empty. 9: Set ipoint ← a.
10: if dt ∈ Xbond then . Add a new bond vertex. 11: Create a new bond vertex b and set ET ← ET ∪ {{b, ipoint}}, BT ← BT ∪ {b}, and xb ← dt. 12: Set ipoint ← b. 13: if dt = "*" then Insert ipoint into Lres. . Add pointer vertex into the queue. 14: if dt ∈ Lres then Pop dt from Lres and update ER ← ER ∪ {{ipoint, dt}}. . Add a new residual edge. 15: if dt = "(" then Insert ipoint into Sbranch. . Add pointer vertex into the stack. 16: if dt = ")" then Set ipoint ← pop(Sbranch) . Update pointer vertex from the stack. 17: Set A ← AT , B ← BT , and E ← ET ∪ ER.
the next operation, (b) a stack Sbranch that stores vertices to use later as a starting point of a “branch” in the spanning tree, and (c) a list Lres that stores vertices to use later for constructing residual edges. In what follows, we describe the seven types of operations, i.e., attach atom, attach bond, branch start, branch end, res atom, res bond, and terminate, corresponding to decision values d ∈ Xatom ∪ Xbond ∪ Lres ∪ {"(", ")", "*", "[eos]"} in detail. See Table 1 for the pairs of operations and the corresponding decisions. We also provide an example of the graph construction process in Figure 2.
Attaching atom and bond vertices to the spanning tree. If the decision d specifies one of the atom or bond attributes, i.e., d ∈ Xatom or d ∈ Xbond, it applies the corresponding attach atom and attach bond operations, respectively. To be specific, the attach atom operation adds a new atom vertex a into the spanning tree T as a neighbor of the pointer vertex ipointer, i.e., AT ← AT ∪ {a}, ET ← ET ∪ {a, ipointer}. The value d is set as the new atom attribute, i.e., xa ← d. The newly added vertex is set as the next pointer vertex, i.e., ipointer ← a. The attach bond operation similarly adds a new bond vertex. For example, a line graph can be expressed as a sequence of attach atom and attach bond operations, e.g., C-C-C where "C" ∈ XA and "-" ∈ XB. Branching out the spanning tree. To express graph structures with vertices of degree larger than two, our framework utilizes pairs of the branch start and the branch end operations with decision values of "(" and ")", respectively. To be specific, the branch start operation inserts the current pointer vertex into a stack Sbranch of vertices. Then the branch end operation pops a vertex from the stack Sbranch and sets it as the new pointer vertex. For an example, a graph with one atom vertex of degree three is constructed from a sequence of decisions C-C(-C)(-C).
Adding residual edges. To construct cyclic molecular graphs, our framework generates residual edges based on pairs of the res atom and res bond operations, corresponding to decision values of "*" and d ∈ Lres, respectively. To be specific, the res atom operation inserts the current (atom) pointer vertex into a list Lres. Next, when a decision value d ∈ Lres is received for the res bond operation, the corresponding vertex d is popped from the list Lres and forms a new residual edge with the current (bond) pointer vertex, i.e., ER ← ER ∪ {{d, ipoint}}. For example, a cyclic molecular graph is constructed from the sequence of decisions C*-O-O-1, where "1" denotes the res bond operation whose decision value is the first atom vertex (with attribute "C").
Termination. The decision "[eos]" applies the terminate operation to finish the construction. We provide the full algorithm in Algorithm 1. We also provide an algorithm to extract a sequence of decisions for constructing a given graph in Appendix A; such an algorithm is used to obtain sequences of decisions as targets for training the generative model under the STGG framework.
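To make the construction loop concrete, the following is a minimal Python sketch of Algorithm 1. It is not the paper's implementation: the vocabulary stand-ins (ATOMS, BONDS), the integer vertex ids, and the function name build_graph are our own assumptions, and residual-edge decisions are given directly as stored vertex ids.

```python
ATOMS = {"C", "N", "O"}   # stand-in for X_atom
BONDS = {"-", "=", "#"}   # stand-in for X_bond

def build_graph(decisions):
    atoms, bonds, tree_edges, res_edges = [], [], [], []
    attr, stack, res_list = {}, [], []
    pointer, next_id = None, 0
    for d in decisions:
        if d in ATOMS:                      # attach_atom
            a, next_id = next_id, next_id + 1
            atoms.append(a)
            attr[a] = d
            if bonds:                       # edge is added once the tree is non-empty
                tree_edges.append((pointer, a))
            pointer = a
        elif d in BONDS:                    # attach_bond
            b, next_id = next_id, next_id + 1
            bonds.append(b)
            attr[b] = d
            tree_edges.append((pointer, b))
            pointer = b
        elif d == "*":                      # res_atom: store pointer for a later ring closure
            res_list.append(pointer)
        elif d == "(":                      # branch_start
            stack.append(pointer)
        elif d == ")":                      # branch_end
            pointer = stack.pop()
        elif d == "[eos]":                  # terminate
            break
        else:                               # res_bond: d is a vertex stored in res_list
            res_list.remove(d)
            res_edges.append((pointer, d))
    return atoms, bonds, tree_edges + res_edges, attr

# The cyclic example C*-O-O-1 from the text, with the residual decision
# written as the stored vertex id (0, the first atom):
print(build_graph(["C", "*", "-", "O", "-", "O", "-", 0, "[eos]"]))
```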
2.3 MASKING OUT INVALID DECISIONS FOR A VALID MOLECULAR GRAPH
Based on Algorithm 1, we develop two criteria for determining whether a sequence of decisions leads to (a) a valid generation of a molecular graph and (b) the generation of a molecule satisfying the valence rule. Such criteria are used to mask out invalid decisions and thereby guarantee generating a valid molecular graph.
Validity of graph generation. To determine whether a sequence of decisions leads to a valid generation of a molecular graph, we propose an algorithm that outputs the set of valid decisions given the previous decision d, the stack of pointer vertices Sbranch, and the list of atom vertices Lres during execution of Algorithm 1. In what follows, we briefly describe the grammar enforced by the algorithm. We provide the detailed algorithm in Appendix B.
• The branch end operation only appears when the stack of pointer vertices Sbranch is non-empty.
• The res atom and res bond operations are atom-specific and bond-specific, hence they only appear when the pointer vertex is located at an atom vertex and a bond vertex, respectively.
• All bond vertices have degree two, hence the branch start and branch end operations only appear when the pointer vertex is located at an atom vertex.
• The stack does not contain duplicates of a pointer vertex at the same time.
Here, we note that our criteria for valid molecular graph generation do not enforce branches and rings to be closed; e.g., C*=C-C#N is allowed by our criteria. This does not violate validity, since Algorithm 1 may still define a valid molecular graph by ignoring the open branches and open rings during construction, i.e., C*=C-C#N generates a molecule identical to that of C=C-C#N.
Validity of satisfying the valence rule. To ensure the chemical validity of molecules, our framework offers the ability to constrain its generation to molecules that satisfy the valence rule for each atom. That is, the generated graph G = (A, B, E) satisfies the constraint v(x_a) ≥ Σ_{b∈N(a)} o(x_b) for every atom vertex a ∈ A, where v(x_a) denotes the valence of atom type x_a and o(x_b) denotes the bond order. To this end, we keep a record r(a) of the available valence for each atom a ∈ A and update it after each decision. For example, when a bond vertex b is newly added, the record of the neighboring atom vertex a is updated as r(a) ← r(a) − o(x_b). The main idea is to forbid decisions that lead to negative values of r(a). We provide a detailed algorithm in Appendix C.
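A minimal sketch of this bookkeeping follows, assuming a fixed valence table; the names (VALENCE, ORDER, valence_mask) are ours, not taken from the paper's code, and the branch/residual pre-allocation described in Appendix C is omitted for brevity.

```python
VALENCE = {"C": 4, "N": 3, "O": 2}   # v(x_a) for a few atom types
ORDER = {"-": 1, "=": 2, "#": 3}     # o(x_b) per bond type

record = {}                           # r(a): remaining valence per atom vertex

def on_attach_atom(a, atom_type, incoming_bond=None):
    record[a] = VALENCE[atom_type]
    if incoming_bond is not None:     # the atom is attached through a bond
        record[a] -= ORDER[incoming_bond]

def on_attach_bond(prev_atom, bond_type):
    record[prev_atom] -= ORDER[bond_type]

def valence_mask(pointer_atom):
    # Forbid any bond decision that would drive r(a) below zero.
    return {b for b, o in ORDER.items() if o <= record[pointer_atom]}
```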
3 TRANSFORMER ARCHITECTURE FOR TREE-BASED GENERATION
In this section, we describe our deep neural network architecture for generating sequences of decisions d1, . . . , dT under the STGG framework. To accurately represent the decision process, we employ tree-based relative positional encodings on the intermediate spanning tree T. We also introduce an attention mechanism to express a probability distribution over Lres, which depends on the intermediate state of the algorithm.
3.1 TREE-BASED POSITIONAL ENCODING FOR MULTI-HEAD ATTENTION LAYERS
Each intermediate layer in our model is a combination of a multi-head self-attention module and a position-wise feed-forward neural network, similar to that of Vaswani et al. (2017). The main difference lies in how we modify the architecture to incorporate tree-based positional encodings. To be specific, let $H = [h_1^\top, \ldots, h_T^\top]^\top \in \mathbb{R}^{T \times \ell}$ denote the input of a self-attention module, where ℓ is the hidden dimension and $h_t \in \mathbb{R}^{1 \times \ell}$ is the hidden representation at position t. The input H is projected by three matrices $W_Q \in \mathbb{R}^{\ell \times \ell_K}$, $W_K \in \mathbb{R}^{\ell \times \ell_K}$, and $W_V \in \mathbb{R}^{\ell \times \ell_V}$ to the corresponding representations Q, K, and V, respectively. A single self-attention head is then calculated as
$$Q = HW_Q, \qquad K = HW_K, \qquad V = HW_V, \tag{1}$$
$$A = \frac{QK^\top}{\sqrt{\ell_K}} + P, \qquad P_{t_1,t_2} = z^{(1)}_{\phi_{\mathrm{forward}}(t_1,t_2)} + z^{(2)}_{\phi_{\mathrm{backward}}(t_1,t_2)} + z^{(3)}_{\phi_{\mathrm{seq}}(t_1,t_2)}, \tag{2}$$
$$\mathrm{Attention}(H) = \mathrm{SoftMax}(M \circ A)\,V, \tag{3}$$
where Attention(H) is the output of the attention head, M is the triangular mask that forbids the model from accessing future information while making a prediction, and ◦ denotes element-wise multiplication between matrices.
Furthermore, P is the newly introduced relative positional encoding. It is a summation over the trainable embedding vectors z^(1), z^(2), and z^(3), indexed by the relative position values φ_forward(t1, t2), φ_backward(t1, t2), and φ_seq(t1, t2). To be specific, the tree-based relative positions φ_forward(t1, t2) and φ_backward(t1, t2) denote the numbers of forward and backward edges, respectively, in the spanning tree path between the pointer vertices at the t1-th and t2-th time steps. The direction of each edge is determined by the order of generation in the STGG framework. Such an encoding is inspired by recent works (Villmow et al., 2021; Lukovnikov & Fischer, 2021; Ying et al., 2021) using Transformers to recognize graphs and trees. Finally, the sequence-based relative position φ_seq(t1, t2) = t1 − t2 denotes the relative difference of the time steps of the decisions.
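As a concrete illustration, the following PyTorch sketch computes the bias P of Eq. (2) from three embedding tables, assuming the index matrices for φ_forward, φ_backward, and φ_seq have been precomputed and clipped to the table sizes; the bucket count and the per-head treatment are our assumptions, not specified by the paper.

```python
import torch

num_buckets, n_heads = 32, 8
z_fwd = torch.nn.Embedding(num_buckets, n_heads)           # z^(1)
z_bwd = torch.nn.Embedding(num_buckets, n_heads)           # z^(2)
z_seq = torch.nn.Embedding(2 * num_buckets + 1, n_heads)   # z^(3)

def positional_bias(phi_fwd, phi_bwd, phi_seq):
    # phi_*: LongTensors of shape (T, T); returns an (n_heads, T, T) bias
    # added to the attention logits before the softmax.
    P = z_fwd(phi_fwd) + z_bwd(phi_bwd) + z_seq(phi_seq + num_buckets)
    return P.permute(2, 0, 1)
```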
3.2 ATTENTION FOR UPDATING RESIDUAL EDGES.
Our model generates a categorical distribution over the space X = Xatom ∪ Xbond ∪ Lres ∪ {"(", ")", "*", "[eos]"}. It is relatively straightforward to output an unnormalized probability over values in Xatom ∪ Xbond ∪ {"(", ")", "*"} using a linear classifier on top of the Transformer model. However, it is non-trivial to assign probability values for the res bond operation, i.e., decision values d ∈ Lres, since Lres varies between time steps. To handle this case, we use an attention-based mechanism for assigning unnormalized probabilities to decision values in the list Lres. To be specific, at the final layer of our model, we obtain the following probability distribution p(d):
$$p(d) \propto \begin{cases} m_g(d) \cdot m_v(d) \cdot \exp(w_d^\top h) & \forall d \in \mathcal{X}_{\mathrm{atom}} \cup \mathcal{X}_{\mathrm{bond}} \cup \{\text{"("}, \text{")"}, \text{"*"}\}, \\ m_g(d) \cdot m_v(d) \cdot \exp(h_d^\top W_1 W_2^\top h) & \forall d \in L_{\mathrm{res}}, \end{cases} \tag{4}$$
where $w_d \in \mathbb{R}^{1 \times \ell}$ is a decision-specific vector, $W_1, W_2 \in \mathbb{R}^{\ell \times \tilde{\ell}}$ are weight matrices, and h is the decision embedding, i.e., the output of the Transformer layer corresponding to the previously made decision. Furthermore, h_d is the embedding corresponding to a past decision d ∈ Lres. Finally, m_g(d) and m_v(d) are masks that exclude invalid decisions violating the grammar of graph generation and the valence rule, respectively. The masks are obtained using the criteria explained in Section 2.3. We use the masks during both training and evaluation of the model; this differs from existing graph-generative models, which forbid invalid decisions only at evaluation via a sample-rejection scheme.
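The two-part scoring of Eq. (4) can be sketched as follows; the shapes and names are our assumptions, and masking is implemented additively with −∞ so that the softmax assigns zero probability to forbidden decisions.

```python
import torch

def decision_logits(h, fixed_w, h_res, W1, W2, mask_g, mask_v):
    # h:       (l,)    embedding of the previous decision
    # fixed_w: (V, l)  one vector w_d per fixed decision value
    # h_res:   (R, l)  embeddings of past decisions currently in L_res
    # mask_*:  (V+R,)  1 for allowed decisions, 0 otherwise
    fixed_scores = fixed_w @ h               # (V,)  w_d^T h
    res_scores = (h_res @ W1) @ (W2.T @ h)   # (R,)  h_d^T W1 W2^T h
    logits = torch.cat([fixed_scores, res_scores])
    return logits.masked_fill((mask_g * mask_v) == 0, float("-inf"))
```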
4 RELATED WORKS
SMILES-based molecular generative models. Several studies have proposed to generate SMILES representations of molecules using string-based (Gómez-Bombarelli et al., 2016; Segler et al., 2018; Kim et al., 2021) or grammar-based (Kusner et al., 2017; Dai et al., 2018) models. While STGG is largely inspired by such works, it allows realizing the intermediate graph structure of the molecule being constructed, whereas the SMILES-based models cannot. This difference allows the adoption of structure-aware deep neural networks in STGG. To be specific, the difference between STGG and the SMILES-based models stems from our newly introduced graph construction procedure using a pointer vertex ipoint, a vertex list Lres, and a vertex stack Sbranch. These allow recognizing an incomplete sequence of decisions as a graph and assigning positions to each decision. In contrast, an incomplete SMILES string does not define a graph structure, and assigning positions to each character is non-trivial.
Graph-based molecular generative models. Researchers have developed a large variety of molecular graph generation frameworks based on atom-wise and bond-wise operations (You et al., 2018; Kajino, 2019; Popova et al., 2019; Madhawa et al., 2019; Honda et al., 2019; Shi et al., 2020; Zang & Wang, 2020; Luo et al., 2021). Our STGG framework simplifies the decision space of such models by exploiting the tree-like structure of molecular graphs. To be specific, STGG requires O(|A| + |B|) decisions for constructing a molecule, while the existing atom-by-atom graph generative models typically require O(|A|²) decisions. This implies that our generative model requires a smaller number of decisions for sparse graphs like molecules, i.e., when |B| is small. Furthermore, our work is the first to successfully train a Transformer architecture (Vaswani et al., 2017) for graph-based molecule generation.
In another line of research, several works (Jin et al., 2018; 2019; 2020) proposed generative models based on the junction-tree representation, with molecular substructures as building blocks. Based on such a representation, these works utilize tree-constructive operations to generate the full graph. Since they operate on such a coarse-grained molecular representation, they typically require fewer building blocks to generate the whole molecule. In comparison, our STGG framework utilizes a more fine-grained molecular representation and may additionally learn the inner semantics of the substructures that are used as building blocks for the junction tree.
5 EXPERIMENT
In this section, we report the experimental results of the proposed spanning tree-based graph generation (STGG) framework. In Section 5.1, we compare with the existing graph generative models on the ZINC250K (Irwin et al., 2012) and QM9 (Ramakrishnan et al., 2014) datasets, and provide ablation studies on each component of our method using the ZINC250K dataset. In Section 5.2, we compare with the existing molecule generative models using the MOSES benchmark (Polykovskiy et al., 2020). Finally, in Section 5.3, we provide our results on the molecular optimization task with respect to the penalized octanol-water partition coefficient (PLOGP). We provide the implementation details and illustrations of the generated molecules in Appendix D and E, respectively.
5.1 MOLECULE GENERATION ON ZINC250K AND QM9 DATASETS
We first compare to the literature standard for the molecular generation task on the ZINC250K and QM9 datasets. To this end, we train our generative model on the respective datasets and sample 10,000 molecules to measure (a) the ratio of valid molecules (VALID), (b) the ratio of unique molecules (UNIQUE), and (c) the ratio of novel molecules with respect to the training dataset (NOVEL). We compare with the numbers reported by recently proposed graph generative models (Shi et al., 2020; Luo et al., 2021). We also provide an additional baseline of a Transformer architecture trained to generate the SMILES representation of the molecule (SMILES-TRANSFORMER).
We mark CORRECTABLE for methods which can optionally use a sample-rejection scheme at evaluation to forbid decisions that violate the chemical rules. Note that our framework can train the generative model under the valence correction mask during training, while existing graph generative models use the valence correction only at evaluation. However, for a fair comparison, we do not use the valence correction mask during training in this experiment.
We report the experimental results in Table 2. In the table, we observe that our STGG framework outperforms all the existing molecular graph generative models in terms of VALID, at the cost of relatively lower NOVEL. In particular, our generative model achieves a 100% ratio of valid molecules on the QM9 dataset even without any correction procedure. Such a result highlights how our model can effectively learn the chemical rules and model the underlying distribution.
Finally, the fact that our STGG framework performs better than the SMILES-based Transformer implies that the performance of our generative model stems from the STGG framework rather than from the Transformer architecture.
Ablation studies. We also conduct ablation studies on the ZINC250K dataset to verify the effectiveness of our method. To this end, we report the experimental results of our method without specific components. To be specific, we ablate the effects of using the sequential relative positional encoding (S), the tree-based relative positional encoding (T), the graph-construction mask (G), and the valence rule mask (V). We also consider an additional baseline using the absolute positional encoding (A), as in the original Transformer architecture (Vaswani et al., 2017). In Table 3 and Figure 4, one can observe that each component of our algorithm is crucial for achieving high VALID. In particular, the tree encoding is essential for the performance, showing the importance of the tree-based representation used in our model.
5.2 MOLECULE GENERATION ON THE MOSES BENCHMARK
We also compare our method with the existing models on the MOSES benchmark. The MOSES benchmark offers a large collection of metrics to assess the overall quality of generated molecules. To be specific, in addition to VALID, UNIQUE, and NOVEL, we consider the internal diversity of molecules (INTDIV), the ratio of samples accepted by chemical filters (FILTERS), Fréchet ChemNet Distance (FCD), nearest neighborhood similarity (SNN), fragment similarity (FRAG), and scaffold similarity (SCAF). The similarity metrics FCD, SNN, FRAG, and SCAF are measured with respect to the test dataset of molecules and the scaffolds extracted from them.
In Tables 4 and 5, we provide our experimental results. Here, one can observe that our algorithm outperforms the existing works on 10 out of 15 metrics, including FILTERS, FCD-TEST, FCD-TESTSF, SNN-TEST, SNN-TESTSF, FRAG-TEST, FRAG-TESTSF, and SCAF-TEST. This highlights the ability of our STGG framework to successfully learn the training distribution.
5.3 MOLECULAR OPTIMIZATION FOR PENALIZED OCTANOL-WATER PARTITION COEFFICIENT
Finally, we demonstrate the usefulness of our STGG framework for the task of molecular optimization. To this end, we consider the literature-standard task of maximizing the penalized octanol-water partition coefficient (PLOGP). However, several works (Gao & Coley, 2020; Coley, 2020) have noted that existing algorithms on this benchmark may not be practical, since PLOGP is ill-defined as a scoring function for molecules; it may assign high values to “unrealistic” molecules that are unstable and hard to synthesize in practice.
To account for this aspect, we propose a new algorithm which can control the quality of molecules by trading off score and realistic-ness. Using this algorithm, we demonstrate that our STGG is capable of generating both (a) high-scoring molecules and (b) realistic molecules with reasonably high scores. At a high level, we train a conditional generative model pθ(m|γ) under the STGG framework with PLOGP as the condition γ. At test time, we sample with a high value of γ to obtain high-scoring molecules. Such an algorithm is inspired by recent offline reinforcement learning algorithms (Schmidhuber, 2019; Kumar & Levine, 2020; Chen et al., 2021; Janner et al., 2021). We fully describe our molecular optimization algorithm in Appendix F.
In Table 5 and Figure 6, we report the results of our molecular optimization experiment. We provide additional illustrations of the generated molecules in Appendix G. Here, our STGG model is able to generate molecules with considerably high PLOGP scores outside the training distribution. Furthermore, in Figure 6 and Appendix G, one can observe how increasing γ gradually changes the optimized molecule from realistic structures to large, chain-like, and unrealistic structures.3 Given such results, one may conclude that our STGG combined with the offline optimization algorithm can successfully trade off high PLOGP against the realistic-ness of the generated molecules. However, we also remark that our results do not imply that our optimization results are strictly better than the baselines; we believe it is necessary to develop and incorporate quantitative measures for the realistic-ness of molecules to fairly evaluate molecular optimization algorithms. We believe such research is an important future direction.
6 CONCLUSION
In this paper, we propose STGG, the first spanning tree-based framework for the generation of molecules using the Transformer architecture. The key idea of using a spanning tree for graph generation applies to graph types beyond molecules; we believe such an extension of our work to be both promising and interesting. We also propose an offline algorithm for molecular optimization which allows trading off high scores against the realistic-ness of molecules. We leave further investigation of the newly proposed optimization algorithm as future work.
3 This is in agreement with prior works (Shi et al., 2020; Ahn et al., 2020; Luo et al., 2021).
7 REPRODUCIBILITY STATEMENT
We provide explicit descriptions of our algorithms in Algorithms 1, 2, 3, and 4. We list the hyper-parameters, the hardware used for the experiments, and the data-processing information in Appendix D. We provide illustrations of the molecules generated in our experiments in Figure 6 and Appendices E and G. We submit the full implementation of our STGG framework and the baselines used in our experiments as supplementary material.
A EXTRACTING SEQUENCE OF DECISIONS FROM A MOLECULAR GRAPH
In this section, we explain our algorithm for finding a sequence of decisions that constructs a given molecular graph G = (A, B, E). The high-level idea is to first perform a depth-first search on G to find a spanning tree T = (A, B, ET) and the corresponding set of residual edges ER = E \ ET. Then the algorithm traverses the spanning tree T according to the depth-first search order while (a) allocating branch start and branch end operations for vertices with more than one successor and (b) adding res atom and res bond operations for any vertex covered by a residual edge {a, b} ∈ ER. To this end, we utilize a stack Sdfs that stores the vertices of G and the branching tokens {"(", ")"} yet to be visited. At each iteration, an element i is popped from the stack Sdfs. If i is a vertex, the algorithm adds the corresponding decision for the attach atom or attach bond operation. If the vertex has more than one successor with respect to the spanning tree T, the successors are inserted into the stack Sdfs surrounded by "(" and ")" tokens. If the vertex has only one successor, the successor is inserted into the stack without an additional operation. When the branching tokens {"(", ")"} are popped from the stack, the algorithm adds the corresponding decision value to the sequence of decisions. We describe the full scheme in Algorithm 2; a Python sketch of the spanning tree extraction step follows the algorithm.
Algorithm 2 Generating a sequence of decisions for a molecular graph
1: Input: graph G = (A, B, E), atom attributes {xa}a∈A, and bond attributes {xb}b∈B
2: Find a spanning tree T = (A, B, ET) of G based on depth-first order and set ER ← E \ ET.
3: Initialize an empty sequence of decisions D.
4: Choose the root a ∈ A of T and insert it into an empty stack Sdfs.
5: do
6:   Pop i from Sdfs.
7:   if i ∈ A ∪ B then
8:     Append xi to D. ▷ Decision to attach the vertex i.
9:     for j ∈ {j | j ∈ N(i), {i, j} ∈ ER} do ▷ Decisions for residual edges.
10:      If i ∈ A, append "*" to D.
11:      If i ∈ B, append j to D.
12:    Let V denote {j | j ∈ N(i), j ∉ AT ∪ BT}. ▷ Successors of i in depth-first order.
13:    If |V| > 1, insert "(" into Sdfs. ▷ Allocate decision to record the pointer vertex.
14:    Insert the vertices in V into Sdfs. ▷ Allocate successors to visit later.
15:    If |V| > 1, insert ")" into Sdfs. ▷ Allocate decision to return to the pointer vertex.
16:  if i ∈ {"(", ")"} then
17:    Append i to D.
18: while |Sdfs| > 0
19: Output: sequence of decisions D = d1, . . . , dT to reconstruct G.
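As an illustration of step 2 of Algorithm 2, the sketch below splits an undirected graph into a spanning tree and residual edges; the adjacency representation and names are our assumptions, and this is one simple variant rather than the paper's exact implementation.

```python
def spanning_tree_split(adj, root):
    # adj: dict mapping each vertex to an iterable of neighbors.
    visited, tree_edges, residual = {root}, [], []
    stack = [root]
    while stack:
        v = stack.pop()
        for u in adj[v]:
            e = frozenset((v, u))
            if u not in visited:
                visited.add(u)
                tree_edges.append(e)   # edge used to reach u for the first time
                stack.append(u)
            elif e not in tree_edges and e not in residual:
                residual.append(e)     # edge between already-visited vertices
    return tree_edges, residual
```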
B ALGORITHMS FOR GRAPH MASKING
To determine whether a sequence of decisions leads to a valid generation of a molecular graph, we propose an algorithm that outputs the set of valid decisions given the current decision d, the stack of pointer vertices Sbranch, and the list of atom vertices Lres during execution of Algorithm 1. We provide the full description in Algorithm 3, followed by a Python sketch of the same mask.
Algorithm 3 Determination of grammar violation
1: Input: current decision d, stack Sbranch, and list Lres.
2: Output: set of candidate decisions D that are valid.
3: if d ∈ Xatom then
4:   Set D ← Xbond. ▷ The atom vertex is followed by a bond vertex.
5:   Set D ← D ∪ {"("}. ▷ The atom vertex may have more than one successor.
6:   Set D ← D ∪ {"*"}. ▷ The atom vertex may have a neighboring residual edge.
7:   If |Sbranch| > 0, set D ← D ∪ {")"}. ▷ The ")" decision appears only when Sbranch is non-empty.
8:   Set D ← D ∪ {"[eos]"}. ▷ Allow termination.
9: if d ∈ Xbond then
10:  Set D ← Xatom. ▷ The bond vertex is followed by an atom vertex.
11:  Set D ← D ∪ Lres. ▷ The bond vertex may have a neighboring residual edge.
12: if d = "*" then
13:  Set D ← Xbond. ▷ The atom vertex is followed by a bond vertex.
14:  Set D ← D ∪ {"("}. ▷ The atom vertex may have more than one successor.
15:  Set D ← D ∪ {"*"}. ▷ The atom vertex may have a neighboring residual edge.
16:  If |Sbranch| > 0, set D ← D ∪ {")"}. ▷ The ")" decision appears only when Sbranch is non-empty.
17:  Set D ← D ∪ {"[eos]"}. ▷ Allow termination.
18: if d ∈ Lres then
19:  Set D ← {")"}. ▷ A residual edge is only constructed at the end of a branch.
20: if d = "(" then
21:  Set D ← Xbond. ▷ A branch always starts with a bond vertex.
22:  Set D ← D ∪ {"[eos]"}. ▷ Allow termination.
23: if d = ")" then
24:  Set D ← Xbond ∪ {"(", ")"}. ▷ A branch is followed by the start or end of another branch.
25:  Set D ← D ∪ {"[eos]"}. ▷ Allow termination.
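A compact Python sketch of this mask follows, mirroring the cases of Algorithm 3 as written; the argument names and the string-valued decisions are our assumptions.

```python
def valid_next_decisions(prev, branch_stack, res_list, ATOMS, BONDS):
    if prev in ATOMS or prev == "*":          # pointer is at an atom vertex
        out = set(BONDS) | {"(", "*", "[eos]"}
        if branch_stack:                       # ")" only when the stack is non-empty
            out.add(")")
        return out
    if prev in BONDS:                          # pointer is at a bond vertex
        return set(ATOMS) | set(res_list)
    if prev == "(":                            # a branch always starts with a bond
        return set(BONDS) | {"[eos]"}
    if prev == ")":                            # start or end of another branch
        return set(BONDS) | {"(", ")", "[eos]"}
    return {")"}                               # prev in res_list: close the branch
```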
We also establish a theoretical result showing that any sequence of decisions permitted by Algorithm 3 is a valid sequence of decisions for Algorithm 1. To this end, we define a valid molecular graph as follows.
Definition 1. A valid molecular graph G = (A, B, E) is a connected bipartite graph where the number of vertices adjacent to any bond vertex b ∈ B is exactly two, i.e., |N(b)| = 2.
Such a definition implies that a molecule should have exactly two atoms connected to each bond. Combined with additional conditions that guarantee the well-behavedness of Algorithm 1 on sequences of decisions, we obtain the following result.
Theorem 1. Let G = (A, B, E), S, and L be the graph, stack of vertices, and list of vertices updated by Algorithm 1 given a sequence of decisions d1, . . . , dT. If the sequence of decisions satisfies the criteria defined by Algorithm 3, the following properties are satisfied.
P1 At the t-th step of Algorithm 1, |S| > 0 if dt = ")".
P2 At the t-th step of Algorithm 1, dt ∈ L if dt ∈ A ∪ B.
P3 When dT = [eos], the graph G is a valid molecular graph.
Here, P1 and P2 imply that the operations in Algorithm 1 are well-defined for d1, . . . , dT.
Proof. First, P1 is enforced by the step in Algorithm 3 which forbids the decision value ")" when the stack S is empty. Next, P2 is enforced by the step selecting decision values from the current list of vertices L. For P3, when dT = [eos], G (a) has to be a connected bipartite graph and (b) the number of vertices adjacent to any bond vertex has to be exactly two. For (a), Algorithm 3 allows a decision dt ∈ Xatom ∪ L only when the pointer vertex is a bond vertex, i.e., d ∈ Xbond. Similarly, dt ∈ Xbond is allowed only when d ∈ Xatom ∪ {"*", "("}. For (b), the algorithm does not allow adding a bond vertex b ∈ B to the list of vertices L, which would be required for a bond vertex of degree higher than two; furthermore, termination is not allowed when there exists a bond vertex with degree smaller than two.
C ALGORITHM FOR VALENCE MASKING
To ensure the chemical validity of molecules, our framework offers the ability to constrain its generation to molecules that satisfy the valence rule for each atom. That is, the generated graph G = (A, B, E) satisfies the constraint v(x_a) ≥ Σ_{b∈N(a)} o(x_b) for every atom vertex a ∈ A, where v(x_a) denotes the valence of atom type x_a and o(x_b) denotes the bond order.
To this end, we propose an algorithm which iteratively updates a record r(a) of the available valence for each atom vertex a ∈ A. The key idea is to (a) update the record for each added atom and bond and (b) pre-allocate valence for the branch start and res atom operations by the minimum bond order min_{x∈Xbond} o(x). The second part (b) is required since the branch start and res atom operations indicate future bond vertices to be added as neighbors of the current atom vertex. We provide the full description in Algorithm 4.
Algorithm 4 Determination of valence rule violation
1: Input: intermediate tree T = (AT, BT, ET), current pointer vertex ipoint, previous pointer vertex ĩpoint, current decision d, previous decision d̃, and record r(·) of available valence.
2: Output: newly updated r and the set D of decisions that violate the valence rule.
3: if d ∈ Xatom then
4:   Set r(ipoint) ← v(d). ▷ Initialize record by atom valence.
5:   Set r(ipoint) ← r(ipoint) − o(d̃). ▷ Update record using the previously added bond vertex.
6:   Set D ← {x | x ∈ Xbond, o(x) > r(ipoint)}. ▷ Reject bond orders higher than the record.
7:   If r(ipoint) < min_{x∈Xbond} o(x), set D ← D ∪ {"(", "*"}. ▷ Reject decisions that pre-allocate a minimal amount of valence.
8: if d ∈ Xbond then
9:   if d̃ ≠ "(" then
10:    Set r(ĩpoint) ← r(ĩpoint) − o(d). ▷ Update record of the previously added atom vertex.
11:  else
12:    Set r(ĩpoint) ← r(ĩpoint) − o(d) + min_{x∈Xbond} o(x). ▷ Update the previous atom vertex, accounting for the pre-allocated valence.
13:  Set D ← {x | x ∈ Xatom, v(x) < o(d)}. ▷ Reject atom types with valence lower than the bond order.
14:  Set D ← D ∪ {x | x ∈ Lres, r(x) < o(d) − min_{x∈Xbond} o(x)}. ▷ Reject residual edge candidates with remaining valence lower than the bond order.
15: if d = "(" then
16:  Set r(ipoint) ← r(ipoint) − min_{x∈Xbond} o(x). ▷ Pre-allocate the minimum bond order.
17:  Set D ← {x | x ∈ Xbond, o(x) > r(ipoint)}. ▷ Reject bond orders higher than the record.
18: if d = ")" then
19:  Set D ← ∅.
20: if d = "*" then
21:  Set r(ipoint) ← r(ipoint) − min_{x∈Xbond} o(x). ▷ Pre-allocate the minimum bond order.
22:  Set D ← {x | x ∈ Xbond, o(x) > r(ipoint)}. ▷ Reject bond orders higher than the record.
23: if d ∈ Lres then
24:  Set r(d) ← r(d) + min_{x∈Xbond} o(x) − o(x_ipoint). ▷ Replace the pre-allocated minimum order with the actual bond order.
25:  Set D ← ∅.
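Complementing the earlier sketch in Section 2.3, the following lines illustrate the pre-allocation step of Algorithm 4 for the "(" and "*" decisions; MIN_ORDER and the function name are our assumptions.

```python
MIN_ORDER = 1   # min over X_bond of o(x), i.e., a single bond

def on_preallocate(record, pointer_atom, bond_orders):
    # Reserve the minimum bond order for a future neighbor announced by "(" or "*".
    record[pointer_atom] -= MIN_ORDER
    # Return the bond decisions to reject: orders exceeding the remaining valence.
    return {b for b, o in bond_orders.items() if o > record[pointer_atom]}
```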
Given Algorithms 3 and 4, we establish the following theoretical guarantee.

Definition 2. A valid molecular graph G = (A, B, E) satisfies the valence rule if v(x_a) ≥ Σ_{b∈N(a)} o(x_b) for every atom vertex a ∈ A.
Theorem 2. Let G = (A, B, E) be the graph updated by Algorithm 1 given a sequence of decisions d1, . . . , dT. If the sequence of decisions satisfies the criteria defined by Algorithms 3 and 4, the corresponding graph G satisfies the valence rule.
Proof. To prove the validity of our algorithms, we show that (1) r(a) ≤ v(x_a) − Σ_{b∈N(a)} o(x_b) and (2) r(a) ≥ 0 at any time step when applying Algorithm 4 to the sequence of decisions d1, . . . , dT.
For (1), we note that the record of an atom a is initialized as v(x_a) − Σ_{b∈N(a)} o(x_b) whenever it is newly added to the graph G by a decision. Furthermore, whenever a new edge is added by an attach bond or res bond operation, the corresponding bond order o(x_b) is deducted from the record. Importantly, the minimum bond order min_{x∈Xbond} o(x) is also added back to the record for an attach bond operation consecutive to a branch start operation, and for a res bond operation. We note that this does not violate (1), since the minimum bond order has already been deducted (i.e., pre-allocated) by the corresponding branch start and res atom operations, respectively.
For (2), one can observe that Algorithm 4 filters out the atoms and bonds that would make the record of the corresponding atom negative. This completes the proof of the theorem.
D IMPLEMENTATION DETAILS
In this section, we provide specific details on how we implement the STGG framework for our experiments.
Training details. For all experiments, we train the Transformer under the STGG framework for 100 epochs with a batch size of 128 on each dataset. We use the AdamW (Loshchilov & Hutter, 2019) optimizer with a constant learning rate of 10^{-4}. We use three Transformer layers for QM9 and ZINC250K and six for MOSES. The rest of the Transformer-related configurations follow those of the original work (Vaswani et al., 2017); we use the attention module with an embedding size of 1024 and eight heads, an MLP with dimension 2048, and dropout with probability 0.1. Using a single Quadro RTX 6000 GPU, it takes approximately three, ten, and 96 hours to fully train the models on the QM9, ZINC250K, and MOSES datasets, respectively.
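A sketch of this optimization setup is shown below; model and loader are assumed to be the STGG Transformer and a DataLoader over decision sequences, and the loss/shape conventions are our assumptions (padding handling is omitted).

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs=100, lr=1e-4):
    # AdamW with a constant learning rate, as in the setup described above.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, targets in loader:   # decision sequences, batch size 128
            logits = model(inputs)       # (B, T, |X|) next-decision logits
            loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
            opt.zero_grad()
            loss.backward()
            opt.step()
```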
Pre-processing. For all datasets, we use the following set of atom vocabularies Xatom: {"CH", "CH2", "CH-", "CH2-", "C", "N-", "NH-", "N", "NH", "N+", "NH+", "NH2+", "NH3+", "O-", "O", "O+", "OH+", "F", "P", "PH", "PH2", "P+", "PH+", "S-", "S", "S+", "SH", "SH+", "Cl", "Br", "I"}. Note that we assign different features to the same atomic numbers with different numbers of explicit hydrogens and formal charges. This allows our algorithm to properly allocate the maximum valence for each atom feature. Next, we use the bond vocabulary Xbond = {"-", "=", "#"}, corresponding to bond orders of single, double, and triple, respectively. For explicit calculation of atom valence during molecular construction, we train our models on kekulized molecules, i.e., aromatic bonds are fixed to single or double bonds.
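The kekulization step can be reproduced with RDKit as in the snippet below; the example molecule is our own choice, not taken from the paper.

```python
from rdkit import Chem

mol = Chem.MolFromSmiles("c1ccccc1O")        # phenol, aromatic form
Chem.Kekulize(mol, clearAromaticFlags=True)  # rewrite aromatic bonds as single/double
print(Chem.MolToSmiles(mol))                 # kekulized SMILES, e.g. "OC1=CC=CC=C1"
```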
E EXAMPLE OF GENERATED MOLECULES
F OFFLINE OPTIMIZATION OF MOLECULES
In this section, we describe our offline molecular optimization algorithm mainly inspired by existing works in offline reinforcement learning (Schmidhuber, 2019; Chen et al., 2021; Janner et al., 2021) and offline model-based optimization (Kumar & Levine, 2020).
To maximize a reward function defined on molecules, our algorithm consists of two simple steps. First, it trains a conditional generative model pθ(m|γ), where m is a molecule and γ is the reward value of m in the offline dataset of molecules. Next, the reward-conditional generative model samples highly-rewarding molecules by conditioning the generation on high values of γ. In particular, we set the value of γ to extrapolate beyond the training dataset. Owing to the high expressive power of the Transformer architecture, our algorithm can successfully generate highly-rewarding molecules.
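A minimal sketch of the reward-conditioned sampling step is given below; the interface (init_state, step, eos_id) is hypothetical and stands in for however the conditional model consumes γ and emits per-step decision logits.

```python
import torch

@torch.no_grad()
def sample_high_reward(model, gamma, max_len=250):
    decisions = []
    state = model.init_state(gamma)   # condition on an (extrapolated) target PLOGP
    prev = None
    while len(decisions) < max_len:
        logits, state = model.step(state, prev)   # masked next-decision logits
        d = torch.distributions.Categorical(logits=logits).sample().item()
        decisions.append(d)
        prev = d
        if d == model.eos_id:
            break
    return decisions
```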
G ADDITIONAL EXPERIMENTAL RESULTS ON MOLECULAR OPTIMIZATION
H COMPARISON WITH CG-VAE
In this section, we additionally compare our STGG framework with the CG-VAE model (Liu et al., 2018), another atom-by-atom graph generative model that allows masking out the action space to generate molecules satisfying the valence rule. Compared to Table 2, we additionally use the FCD, SNN, FRAG, and SCAF metrics to measure how faithfully the generative models learn the underlying distribution of molecules. Note that the VALID metric used in Table 2 is insufficient for comparing the faithfulness of STGG and CG-VAE, since both are guaranteed to generate molecules satisfying the valence rule.
In Tables 6 and 7, one can observe that our algorithm substantially outperforms CG-VAE in terms of faithfully learning the underlying distribution of molecules, at the cost of relatively lower UNIQUE. For example, the FCD score of our STGG for learning the ZINC dataset is 0.2775, while that of CG-VAE is 11.33. This highlights the expressive power of our STGG framework.
1. What is the focus and contribution of the paper on molecular graph generation?
2. What are the strengths of the proposed approach, particularly in its novelty and ability to infer intermediate graph structures?
3. What are the weaknesses or concerns regarding the method, especially in comparison to atom-by-atom graph generative models?
4. Are there any questions about the training data used for the proposed method?

Summary Of The Paper
The paper proposes a novel spanning tree-based graph generation (STGG) framework. The key idea is to represent the molecular graph as a sequence of decisions according to a novel spanning tree-based grammar, and then model these decision sequences using a Transformer. To accommodate such a novel algorithm, several interesting techniques are involved, e.g., representing molecules as bipartite graphs and using tree-based positional encoding for the Transformer. The authors claim that the proposed method allows generating valid molecular structures and inferring the intermediate graph structure.
Review
Strengths
The proposed method is quite novel. The key idea is to represent the molecular graph as a sequence of decisions according to a novel spanning tree-based grammar. Compared to SMILES-based molecular generative methods, STGG allows inferring the intermediate graph structures and takes the graph structure into consideration. The method is general and can be applied to other graph structures.
The paper is well written. The experiments of the work are comprehensive and well designed. In particular, for PLOGP molecular optimization, they adopt an offline optimization algorithm, which is interesting to me.
The code is already provided in the Supplementary Material.
Weaknesses or Questions
What is the main advantage of STGG over atom-by-atom graph generative models? Table 2 indicates that STGG tends to generate molecules with lower novelty, and its main advantage is validity. The authors mention that atom-by-atom methods determine validity via sample-rejection. But actually, we could also mask out invalid decisions for atom-by-atom methods by keeping a record of valence and bond information.
Concerns on training data: To get the training data, the authors use Algorithm 2 (presented in Appendix A) to obtain the sequence of decisions from graph structures. For each graph structure, how many sequences do you get? Do you fix the random seed of this algorithm, or generate the sequence on the fly in each batch with different randomness? My key point is that if the training sequence is generated on the fly for each batch with a random DFS, the training data for SMILES-TRANSFORMER should also be augmented (instead of using canonical SMILES) for a fair comparison in Table 2.
Generating valid molecular graphs. Without any control, a model may generate decisions that (a) do not comply with the grammar of STGG or (b) leads to a molecule violating the chemical valence rule. To prevent this scenario, we conduct two criteria for determining validity of the given decision for (a) and (b). We further elaborate this in Section 2.3.
2.2 DECISION PROCESS FOR SPANNING TREE-BASED GRAPH GENERATION
We now explain how our STGG framework incorporates the decisions d1, . . . , dT to build the spanning tree T = (AT ,BT , ET ) and residual edges ER from scratch. To this end, our framework introduces the state information of (a) a pointer vertex ipoint ∈ AT ∪ BT for specifying the target of
Algorithm 1 Tree-based generation of molecular graphs 1: Input: sequence of decisions d1, . . . , dT . 2: Output: graph G = (A,B, E), atom attributes {xa}a∈A, and bond attributes {xb}b∈B 3: Set AT ← ∅, BT ← ∅, ET ← ∅, ER ← ∅, and T ← (AT ,BT , ET ). . Initialize the empty graph. 4: Set Lres as an empty list and Sbranch as an empty stack. 5: for t = 1, . . . , T do 6: if dt ∈ Xatom then . Add a new atom vertex. 7: Create a new atom vertex a and set A ← A∪ {b} and xa ← dt. 8: If |BT | > 0, set ET ← ET ∪ {{a, ipoint}}. . Edge is added when tree is not empty. 9: Set ipoint ← a.
10: if dt ∈ Xbond then . Add a new bond vertex. 11: Create a new bond vertex b and set ET ← ET ∪ {{b, ipoint}}, BT ← BT ∪ {b}, and xb ← dt. 12: Set ipoint ← b. 13: if dt = "*" then Insert ipoint into Lres. . Add pointer vertex into the queue. 14: if dt ∈ Lres then Pop dt from Lres and update ER ← ER ∪ {{ipoint, dt}}. . Add a new residual edge. 15: if dt = "(" then Insert ipoint into Sbranch. . Add pointer vertex into the stack. 16: if dt = ")" then Set ipoint ← pop(Sbranch) . Update pointer vertex from the stack. 17: Set A ← AT , B ← BT , and E ← ET ∪ ER.
the next operation, (b) a stack Sbranch that stores vertices to use later as a starting point of a “branch” in the spanning tree, and (c) a list Lres that stores vertices to use later for constructing residual edges. In what follows, we describe the seven types of operations, i.e., attach atom, attach bond, branch start, branch end, res atom, res bond, and terminate, corresponding to decision values d ∈ Xatom ∪ Xbond ∪ Lres ∪ {"(", ")", "*", "[eos]"} in detail. See Table 1 for the pairs of operations and the corresponding decisions. We also provide an example of the graph construction process in Figure 2.
Attaching atom and bond vertices to the spanning tree. If the decision d specifies one of the atom or bond attributes, i.e., d ∈ Xatom or d ∈ Xbond, it applies the corresponding attach atom and attach bond operations, respectively. To be specific, the attach atom operation adds a new atom vertex a into the spanning tree T as a neighbor of the pointer vertex ipointer, i.e., AT ← AT ∪ {a}, ET ← ET ∪ {a, ipointer}. The value d is set as the new atom attribute, i.e., xa ← d. The newly added vertex is set as the next pointer vertex, i.e., ipointer ← a. The attach bond operation similarly adds a new bond vertex. For example, a line graph can be expressed as a sequence of attach atom and attach bond operations, e.g., C-C-C where "C" ∈ XA and "-" ∈ XB. Branching out the spanning tree. To express graph structures with vertices of degree larger than two, our framework utilizes pairs of the branch start and the branch end operations with decision values of "(" and ")", respectively. To be specific, the branch start operation inserts the current pointer vertex into a stack Sbranch of vertices. Then the branch end operation pops a vertex from the stack Sbranch and sets it as the new pointer vertex. For an example, a graph with one atom vertex of degree three is constructed from a sequence of decisions C-C(-C)(-C).
Adding residual edges. To construct cyclic molecular graphs, our framework generates residual edges based on pairs of res atom operation and res bond operation, corresponding to decision values of "*" and d ∈ Lres, respectively. To be specific, the res atom operation inserts the current (atom) pointer vertex into a list Lres. Next, when the decision value d ∈ Lres is received for the res bond operation, the corresponding vertex d is popped from the list Lres and forms a new residual edge with the current (bond) pointer vertex, i.e., ER ← ER ∪ {{d, ipointer}}. For an example, a cyclic molecular graph is constructed from a sequence of decisions C*-O-O-1, where "1" indicates res bond operation with decision of the first atom vertex with attribute "C".
Termination. The decision "[eos]" applies the terminate operation to finish the construction. We provide the full algorithm in Algorithm 1. We also provide an algorithm to extract a sequence of decisions for constructing a given graph in Appendix A. Such an algorithm is used to obtain sequence of decisions as targets for training the generative model under the STGG framework.
2.3 MASKING OUT INVALID DECISIONS FOR A VALID MOLECULAR GRAPH
Based on Algorithm 1, we develop two criteria for determining whether if a sequence of decisions leads to (a) valid generation of a molecular graph and (b) generation of a molecule satisfying the valence rule. Such criteria are used to mask out invalid decisions to guarantee generating a valid molecular graph.
Validity of graph generation. To determine whether if a sequence of decisions lead to a valid generation of a molecular graph, we propose an algorithm that outputs a set of valid decisions given the previous decision d, stack of pointer vertices Sbranch, and list of atom vertices Lres during execution of Algorithm 1. In what follows, we provide a brief description of grammars enforced by the algorithm. We also provide the detailed algorithm in Appendix B.
• The branch end operation only appears when the stack of pointer vertex Sbranch is non-empty. • The operations res atom and res bond are atom-specific and bond-specific, hence they only
appear when the pointer vertex is located at an atom vertex and a bond vertex, respectively. • All the bond vertices have degree of two, hence branch start and branch end operations only
appear when the pointer vertex is located at an atom vertex. • The stack do not contain duplicates of the a pointer vertex at the same time.
Here, we note that our criteria for valid molecular graph generation do not enforce branches and rings to be closed, e.g., C*=C-C#N is allowed by our criteria. This does not violate the validity since our Algorithm 1 may still define a valid molecular graph by ignoring the open branches and the open rings during construction, i.e., C*=C-C#N generates a molecule identical to that of C=C-C#N.
Validity of satisfying the valence rule. To consider chemical validity of molecules, our framework offers the ability to constrain the its generation on molecules that satisfy the valence rule for each atom. That is, the generated graph G = (A,B, E) satisfies the constraint v(xa) ≥ ∑ b∈N (a) o(xb) for every atom vertex a ∈ A where v(xa) denotes the valence of an atom type xa and o(xb) denotes the bond order. To this end, we keep a record r(a) of available valence for each atom a ∈ A and update them for each decision. For example, when a bond vertex b is newly added, record of the neighboring atom vertex a is updated by r(a) ← v(a) − o(xb). The main idea is to forbid actions that lead to negative values of r(a). We provide a detailed algorithm in Appendix C.
3 TRANSFORMER ARCHITECTURE FOR TREE-BASED GENERATION
In this section, we describe our deep neural network architecture for generating sequence of decisions d1, . . . , dT under the STGG framework. To accurately recognize the decision process, we employ the tree-based relative positional encodings on the intermediate spanning tree T . We also introduce an attention mechanism to express a probability distribution over Lres which depends on intermediate state of the algorithm.
3.1 TREE-BASED POSITIONAL ENCODING FOR MULTI-HEAD ATTENTION LAYERS
Each intermediate layer in our model is a combination of a multi-head self-attention module and a position-wise feed-forward neural network similar to that of Vaswani et al. (2017). The main difference is on how we modify the architecture to incorporate tree-based positional encodings. To be specific, let H = [h>1 , . . . , h > T ] ∈ RT×` denote the input of a self-attention module where d is the hidden dimension and ht ∈ R1×` is the hidden representation at position t. The input H is projected by three matrices WQ ∈ R`×`K ,WK ∈ R`×`K and WV ∈ R`×`V to the corresponding representations Q,K and V , respectively. A single self-attention head is then calculated as
Q = HWQ, K = HWK , V = HWV , (1) A = QK>√ `K + P, Pt1,t2 = z (1) φforward(t1,t2) + z (2) φbackward(t1,t2) + z (3) φseq(t1,t2) , (2)
Attention(H) = SoftMax(M ◦A)V, (3) where Attention(H) is output of the attention head,M is the triangular mask to forbid the model from accessing future information while making a prediction, and ◦ denotes the element-wise multiplication between matrices.
Furthermore, P is the newly introduced relative positional encoding. It is a summation over the trainable embedding vectors z(1), z(2), z(3) indexed by relative position values of φforward(t1, t2), φbackward(t1, t2), and φseq(t1, t2). To be specific, the tree-based relative positions φforward(t1, t2) and φbackward(t1, t2) denotes the number of forward and backward edges in the spanning tree path between pointer vertices at the t1-th and t2-th time step. The direction of edge is decided by order of generation in the STGG framework. Such an encoding was inspired from recent works (Villmow et al., 2021; Lukovnikov & Fischer, 2021; Ying et al., 2021) using Transformers to recognize graphs and trees. Finally, the sequence-based relative position φseq(t1, t2) = t1 − t2 denotes the relative difference of time-steps for the decisions.
3.2 ATTENTION FOR UPDATING RESIDUAL EDGES.
Our model generates a categorical distribution over the space of X = Xatom ∪ Xbond ∪ Lres ∪ {"(", ")", "*", "[eos]"}. It is relatively straight-forward to output an unnormalized probability over values of Xatom ∪ Xbond ∪ {"(", ")", "*"} using a linear classifier on top of the Transformer model. However, it is non-trivial to assign probability values for res bond operation, i.e., decisions values of d ∈ Lres, since Lres varies between different time-steps. To handle this case, we use an attention-based mechanism for assigning unnormalized probability to decision values in the list Lres. To be specific, at the final layer of our model, we obtain the following probability distribution p(d).
p(d) ∝ { mg(d) ·mv(d) · exp(w>d h) ∀d ∈ Xatom ∪ Xbond ∪ {"(", ")", "*"}, mg(d) ·mv(d) · exp ( h>dW1W > 2 h ) ∀d ∈ Lres, (4) where wd ∈ R1×` is a decision-specific vector, W1,W2 ∈ R`× ˜̀ are weight matrices, and h is the decision embedding, i.e., output of the Transformer layer corresponding to the previously made decision. Furthermore, hd is the embedding corresponding to a past decision d ∈ Lres. Finally, mv(d),mg(d) are the mask for excluding invalid decisions that violate the validity of graph generation and valence rule, respectively. The masks are obtained using the criteria explained in Section 2.3. We use the mask during both training and evaluation of the model; this differs from existing graphgenerative models which forbid invalid decisions only at evaluation using a sample-rejection scheme.
4 RELATED WORKS
SMILES-based molecular generative models. Several studies proposed to generate a SMILES representation of molecules using string-based (Gómez-Bombarelli et al., 2016; Segler et al., 2018; Kim et al., 2021) or grammar-based (Kusner et al., 2017; Dai et al., 2018) models. While our STGG is largely inspired from such works, our STGG allows realizing the intermediate graph structure of the molecule being constructed while the SMILES-based models cannot. This difference allows the adoption of structure-aware deep neural networks to STGG. To be specific, the difference between STGG and the SMILES-based models appears from our newly introduced graph construction procedure using a pointer vertex ipoint, a vertex-list L, and a vertex-stack S . They allow recognizing an incomplete sequence of decisions as a graph and assigning positions to each decision. In contrast,
an incomplete SMILES string does not define a graph structure and assigning positions to each character is non-trivial.
Graph-based molecular generative models. Researchers have developed a large variety of molecular graph generation frameworks based on atom-wise and bond-wise operations (You et al., 2018; Kajino, 2019; Popova et al., 2019; Madhawa et al., 2019; Honda et al., 2019; Shi et al., 2020; Zang & Wang, 2020; Luo et al., 2021). Our STGG framework simplifies the decision space of such models by exploiting the tree-like graph structures of molecules. To be specific, STGG requires O(|A|+ |B|) decisions for constructing a molecule while the existing atom-by-atom graph generative models typically requireO(|A|2) decisions. This implies that our generative model requires a smaller number of decisions for sparse graphs like molecules, i.e., when |B| is small. Furthermore, our work is the first to successfully train a Transformer architecture (Vaswani et al., 2017) for graph-based molecule generation.
In another line of research, several works (Jin et al., 2018; 2019; 2020) proposed generative models based on using the junction-tree representation with molecular substructures as building blocks. Based on such a representation, such works utilize tree-constructive operations to generate the full graph. Since they operate on such a coarse-grained molecular representation, they typically require a fewer number of building blocks to generate the whole molecule. In comparison, our STGG framework utilizes a more fine-grained molecular representation and may additionally learn the inner semantics of substructures that are used as building blocks for the junction tree.
5 EXPERIMENT
In this section, we report the experimental results of the proposed spanning tree-based graph generation (STGG) framework. In Section 5.1, we compare with the existing graph generative models on the ZINC250K (Irwin et al., 2012) and QM9 (Ramakrishnan et al., 2014) datasets and provide ablation studies on each component of our method using the ZINC250K dataset. In Section 5.2, we compare with the existing molecule generative models using the MOSES benchmark (Polykovskiy et al., 2020). Finally, in Section 5.3, we provide our results on the molecular optimization task with respect to the penalized octanol-water partition coefficient function (PLOGP). We provide the implementation details and illustrations of the generated molecules in Appendix D and E, respectively.
5.1 MOLECULE GENERATION ON ZINC250K AND QM9 DATASETS
We first compare to the literature standard for the molecular generation task in the ZINC250K and the QM9 datasets. To this end, we train our generative model on the respective datasets and sample 10,000 molecules to measure (a) the ratio of valid molecules (VALID), (b) the ratio of unique molecules (UNIQUE), and (c) the ratio of novel molecules with respect to the training dataset (NOVEL). We compare with the numbers reported by recently proposed graph generative models (Shi et al., 2020; Luo et al., 2021). We also provide an additional baseline of a transformer architecture trained to generate the SMILES representation for the molecule (SMILES-TRANSFORMER).
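For reference, these three ratios can be computed along the following lines with RDKit. The exact denominators vary slightly across papers, so this is a hedged sketch rather than any benchmark's official script.

```python
from rdkit import Chem

def generation_metrics(sampled_smiles, train_smiles):
    """Compute VALID / UNIQUE / NOVEL ratios for sampled SMILES (sketch)."""
    mols = [Chem.MolFromSmiles(s) for s in sampled_smiles]
    valid = [Chem.MolToSmiles(m) for m in mols if m is not None]  # canonicalize
    unique = set(valid)
    train = {Chem.MolToSmiles(Chem.MolFromSmiles(s)) for s in train_smiles}
    return {
        "valid": len(valid) / len(sampled_smiles),
        "unique": len(unique) / max(len(valid), 1),
        "novel": len(unique - train) / max(len(unique), 1),
    }
```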
We mark CORRECTABLE for methods which can optionally use a sample-rejection scheme to forbid decisions that violate the chemical rules at evaluation. Note that our framework can train the generative model under the valence correction mask during training, while existing graph generative models use the valence correction only at evaluation. For a fair comparison, however, we do not use the valence correction mask during training in this experiment.
We report the experimental results in Table 2. In the table, we observe that our STGG framework outperforms all the existing molecular graph generative models in terms of VALID, at the cost of a relatively lower NOVEL. In particular, our generative model achieves a 100% ratio of valid molecules on the QM9 dataset even without any correction procedure. Such a result highlights how our model can effectively learn the chemical rules and model the underlying distribution.
Finally, the fact that our STGG framework performs better than the SMILES-based Transformer implies that the performance of our generative model stems from the STGG framework rather than from the Transformer architecture alone.
Ablation studies. We also conduct ablation studies on the ZINC250K dataset to verify the effectiveness of our method. To this end, we report the experimental results of our method without specific components. To be specific, we ablate the effects of using sequential relative positional encoding (S), tree-based relative positional encoding (T), the graph-construction mask (G), and the valence rule mask (V). We also consider an additional baseline using the absolute positional encoding (A) as in the original Transformer architecture (Vaswani et al., 2017). In Table 3 and Figure 4, one can observe how each component of our algorithm is crucial for achieving high VALID. In particular, the tree encoding is essential for the performance, showing the importance of the tree-based representation that we use in our model.
5.2 MOLECULE GENERATION ON THE MOSES BENCHMARK
We also compare our method on the MOSES benchmark with the existing models. The MOSES benchmark offers a large collection of metrics to assess the overall quality of generated molecules. To be specific, in addition to VALID, UNIQUE, and NOVEL, we consider the internal diversity of molecules (INTDIV), the ratio of samples accepted by chemical filters (FILTERS), Fréchet ChemNet Distance (FCD), nearest neighbor similarity (SNN), fragment similarity (FRAG), and scaffold similarity (SCAF). The similarity metrics FCD, SNN, FRAG, and SCAF are measured with respect to the test dataset of molecules and the scaffolds extracted from them.
In Tables 4 and 5, we provide our experimental results. Here, one can observe how our algorithm outperforms the existing works on 10 out of 15 metrics, including FILTERS, FCD-TEST, FCD-TESTSF, SNN-TEST, SNN-TESTSF, FRAG-TEST, FRAG-TESTSF, and SCAF-TEST. This highlights the ability of our STGG framework to successfully learn the training distribution.
5.3 MOLECULAR OPTIMIZATION FOR PENALIZED OCTANOL-WATER PARTITION COEFFICIENT
Finally, we demonstrate the usefulness of our STGG framework for the task of molecular optimization. To this end, we consider the literature standard of maximizing the penalized octanol-water partition coefficient (PLOGP). However, several works (Gao & Coley, 2020; Coley, 2020) have noted that the existing algorithms on this benchmark may not be practical, since PLOGP is ill-defined as a scoring function for molecules; it may assign high values to “unrealistic” molecules that are unstable and hard to synthesize in practice.
To address this aspect, we propose a new algorithm which can control the quality of molecules by trading off their scores against their realistic-ness. Using this algorithm, we demonstrate how our STGG is capable of generating both (a) high-scoring molecules and (b) realistic molecules with a reasonably high score. At a high level, we train a conditional generative model pθ(m|γ) under the STGG framework with PLOGP as the condition γ. At test time, we sample conditioned on a high value of γ to obtain high-scoring molecules. Such an algorithm is inspired by recent offline reinforcement learning algorithms (Schmidhuber, 2019; Kumar & Levine, 2020; Chen et al., 2021; Janner et al., 2021). We fully describe our molecular optimization algorithm in Appendix F.
In Table 5 and Figure 6, we report the results of our molecular optimization experiment. We provide additional illustrations of the generated molecules in Appendix G. Here, our STGG model is able to generate molecules with considerably high PLOGP scores outside the training distribution. Furthermore, in Figure 6 and Appendix G, one can observe how increasing γ gradually changes the optimized molecule from realistic structures to large, chain-like, and unrealistic structures.3 Given such results, one may conclude that our STGG combined with the offline optimization algorithm can successfully trade off high PLOGP against the realistic-ness of the generated molecules. However, we also remark that these results do not imply that our optimization results are strictly better than the baselines; we believe it is necessary to develop and incorporate quantitative measures of the realistic-ness of molecules to fairly evaluate molecular optimization algorithms. We believe such research to be an important future direction.
6 CONCLUSION
In this paper, we propose STGG, the first spanning tree-based framework for the generation of molecules using the Transformer architecture. The key idea of using a spanning tree for graph generation applies to any graph type beyond molecules; we believe such an extension of our work to be both promising and interesting. We also propose an offline algorithm for molecular optimization which allows trading off high scores against the realistic-ness of molecules. We leave further investigation of the newly proposed optimization algorithm as future work.
3This is in agreement with prior works (Shi et al., 2020; Ahn et al., 2020; Luo et al., 2021).
7 REPRODUCIBILITY STATEMENT
We provide an explicit description of our algorithms in Algorithm 1 and in Algorithms 2, 3, and 4 of the appendix. We list the hyper-parameters, the hardware used for the experiments, and the data-processing information in Appendix D. We provide illustrations of the molecules generated for the experiments in Figure 6 and Appendices E and G. We submit the full implementation of our STGG framework and the baselines used in our experiments as supplementary material.
A EXTRACTING SEQUENCE OF DECISIONS FROM A MOLECULAR GRAPH
In this section, we explain our algorithm for finding a sequence of decisions to construct a given molecular graph G = (A,B, E). The high-level idea is to first perform a depth-first search on G to find a spanning tree T = (A,B, ET) and the corresponding set of residual edges ER = E \ ET. Then the algorithm traverses the spanning tree T according to the depth-first search order while (a) allocating branch start and branch end for vertices with degree higher than two and (b) adding res atom and res bond operations for any vertex covered by a residual edge {a, b} ∈ ER. To this end, we utilize a stack Sdfs that stores the vertices of G and the branching tokens {"(", ")"} to visit. At each iteration, an element i is popped from the stack Sdfs. If i is a vertex, the algorithm adds the corresponding decision for the attach atom and attach bond operations. If the vertex has more than one successor with respect to the spanning tree T, the successors are inserted into the stack Sdfs with surrounding "(" and ")" tokens. If the vertex has only one successor, the successor is inserted into the stack without an additional operation. When the branching tokens {"(", ")"} are popped from the stack, the algorithm adds the corresponding decision value to the sequence of decisions. We describe the full scheme in Algorithm 2.
Algorithm 2 Generating a sequence of decisions for a molecular graph
 1: Input: graph G = (A,B, E), atom attributes {xa}a∈A, and bond attributes {xb}b∈B
 2: Find a spanning tree T = (A,B, ET) of G based on depth-first order and set ER ← E \ ET.
 3: Initialize an empty sequence of decisions D.
 4: Choose the root a ∈ A of T to insert into an empty stack Sdfs.
 5: do
 6:   Pop i from Sdfs.
 7:   if i ∈ A ∪ B then
 8:     Append xi to D.  ▷ Decision to attach the atom or bond vertex.
 9:     for j ∈ {j | j ∈ N(i), {i, j} ∈ ER} do  ▷ Decisions for residual edges.
10:       If i ∈ A, append "*" to D.
11:       If i ∈ B, append j to D.
12:     Let V denote {j | j ∈ N(i), j ∉ AT ∪ BT}.  ▷ Successors of i in depth-first order.
13:     If |V| > 1, insert "(" into Sdfs.  ▷ Allocate decision to record the pointer vertex.
14:     Insert the vertices in V into Sdfs.  ▷ Allocate successors to visit later.
15:     If |V| > 1, insert ")" into Sdfs.  ▷ Allocate decision to return to the pointer vertex.
16:   if i ∈ {"(", ")"} then
17:     Append i to D.
18: while |Sdfs| > 0
19: Output: sequence of decisions D = d1, . . . , dT to reconstruct G.
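For readers who prefer code, the following Python sketch produces the same kind of linearization recursively rather than with an explicit stack; the names `tree`, `attrs`, and `res_marks` are illustrative assumptions, not the paper's released code.

```python
def linearize(v, tree, attrs, res_marks, out):
    """Emit STGG decisions for the subtree of the spanning tree rooted at v.

    tree[v]      : ordered list of tree children of v (depth-first order)
    attrs[v]     : atom/bond token of v (the attach_atom / attach_bond decision)
    res_marks[v] : residual-edge decisions at v: "*" on the atom side,
                   or the partner vertex id on the bond side
    """
    out.append(attrs[v])
    out.extend(res_marks.get(v, []))
    children = tree[v]
    for k, child in enumerate(children):
        branch = k < len(children) - 1        # wrap all but the last child
        if branch:
            out.append("(")                   # branch start
        linearize(child, tree, attrs, res_marks, out)
        if branch:
            out.append(")")                   # branch end
    return out

# Example: decisions = linearize(root, tree, attrs, res_marks, []) + ["[eos]"]
```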
B ALGORITHMS FOR GRAPH MASKING
To determine whether a sequence of decisions leads to a valid generation of a molecular graph, we propose an algorithm that outputs the set of valid decisions given the current decision d, the stack of pointer vertices Sbranch, and the list of atom vertices Lres during the execution of Algorithm 1. We provide the full description in Algorithm 3.
Algorithm 3 Determination of grammar violations
 1: Input: current decision d, stack Sbranch, and list Lres.
 2: Output: list of candidate decisions D that are valid.
 3: if d ∈ Xatom then
 4:   Set D ← Xbond.  ▷ The atom vertex is followed by a bond vertex.
 5:   Set D ← D ∪ {"("}.  ▷ The atom vertex may have more than one successor.
 6:   Set D ← D ∪ {"*"}.  ▷ The atom vertex may have a neighboring residual edge.
 7:   If |Sbranch| > 0, set D ← D ∪ {")"}.  ▷ The ")" decision appears only when Sbranch is non-empty.
 8:   Set D ← D ∪ {"[eos]"}.  ▷ Allow termination.
 9: if d ∈ Xbond then
10:   Set D ← Xatom.  ▷ The bond vertex is followed by an atom vertex.
11:   Set D ← D ∪ Lres.  ▷ The bond vertex may have a neighboring residual edge.
12: if d = "*" then
13:   Set D ← Xbond.  ▷ The atom vertex is followed by a bond vertex.
14:   Set D ← D ∪ {"("}.  ▷ The atom vertex may have more than one successor.
15:   Set D ← D ∪ {"*"}.  ▷ The atom vertex may have a neighboring residual edge.
16:   If |Sbranch| > 0, set D ← D ∪ {")"}.  ▷ The ")" decision appears only when Sbranch is non-empty.
17:   Set D ← D ∪ {"[eos]"}.  ▷ Allow termination.
18: if d ∈ Lres then
19:   Set D ← {")"}.  ▷ A residual edge is only constructed at the end of each branch.
20: if d = "(" then
21:   Set D ← Xbond.  ▷ A branch always starts with a bond vertex.
22:   Set D ← D ∪ {"[eos]"}.  ▷ Allow termination.
23: if d = ")" then
24:   Set D ← Xbond ∪ {"(", ")"}.  ▷ A branch is followed by the start or end of another branch.
25:   Set D ← D ∪ {"[eos]"}.  ▷ Allow termination.
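A compact Python rendering of these grammar rules is sketched below. The `open_rings` argument plays the role of Lres; in practice one would track the *type* of the previous decision explicitly, since Lres is mutated when a ring closes. All names are illustrative.

```python
def valid_decisions(prev, atom_vocab, bond_vocab, branch_stack, open_rings):
    """Sketch of Algorithm 3: the set of grammar-valid next decisions."""
    if prev in atom_vocab or prev == "*":           # pointer on an atom vertex
        valid = set(bond_vocab) | {"(", "*", "[eos]"}
        if branch_stack:                            # ")" needs a non-empty stack
            valid.add(")")
    elif prev in bond_vocab:                        # pointer on a bond vertex
        valid = set(atom_vocab) | set(open_rings)   # next atom, or close a ring
    elif prev == "(":                               # a branch starts with a bond
        valid = set(bond_vocab) | {"[eos]"}
    elif prev == ")":                               # next branch, or terminate
        valid = set(bond_vocab) | {"(", ")", "[eos]"}
    else:                                           # prev just closed a residual edge
        valid = {")"}
    return valid
```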
We also establish a theoretical result showing that a sequence of decisions satisfying the criteria of Algorithm 3 is always a valid sequence of decisions for Algorithm 1. To this end, we define a valid molecular graph as follows.
Definition 1. A valid molecular graph G = (A,B, E) is a connected bipartite graph where the number of vertices adjacent to any bond vertex b ∈ B is exactly two, i.e., |N (b)| = 2.
Such a definition implies that a molecule should have exactly two atoms connected to each bond. Combined with additional conditions that guarantee the well-behavedness of Algorithm 1 on sequences of decisions, we obtain the following result.
Theorem 1. Let G = (A,B, E), S, and L be the graph, stack of vertices, and list of vertices updated by Algorithm 1 given a sequence of decisions d1, . . . , dT. If the sequence of decisions satisfies the criteria defined by Algorithm 3, the following properties are satisfied.
P1 At the t-th step of Algorithm 1, |S| > 0 if dt = ")".
P2 At the t-th step of Algorithm 1, dt ∈ L if dt ∈ A ∪ B.
P3 When dT = [eos], the graph G is a valid molecular graph.
Here, P1 and P2 imply that the operations in Algorithm 1 are well-defined for d1, . . . , dT.
Proof. First, P1 is enforced by the step in Algorithm 3 which forbids the decision value ")" when the stack S is empty. Next, P2 is enforced by the step selecting decision values from the current list of vertices L. To enforce P3, when dT = [eos], G (a) has to be a connected bipartite graph and (b) the number of vertices adjacent to any bond vertex has to be exactly two. For (a), Algorithm 3 allows a decision dt ∈ Xatom ∪ L only when the pointer vertex is a bond vertex, i.e., d ∈ Xbond. Similarly, dt ∈ Xbond is allowed only when d ∈ Xatom ∪ {"*", "("}. For (b), the algorithm does not allow adding a bond vertex b ∈ B to the list of vertices L, which would be required for a vertex to attain degree higher than two. Termination is not allowed when there exists a bond vertex with degree smaller than two.
C ALGORITHM FOR VALENCE MASKING
To consider the chemical validity of molecules, our framework offers the ability to constrain its generation to molecules that satisfy the valence rule for each atom. That is, the generated graph G = (A,B, E) satisfies the constraint $v(x_a) \ge \sum_{b \in \mathcal{N}(a)} o(x_b)$ for every atom vertex a ∈ A, where v(xa) denotes the valence of an atom type xa and o(xb) denotes the bond order.
To this end, we propose an algorithm which iteratively updates a record r(a) of the available valence for each atom vertex a ∈ A. The key idea is to (a) update the record accordingly for each addition of an atom or a bond order and (b) pre-allocate valence for the branch start and res atom operations by the minimum bond order $\min_{x \in \mathcal{X}_{\text{bond}}} o(x)$. The second part (b) is required since the branch start and res atom operations indicate future bond vertices to be added as neighbors of the current atom vertex. We provide the full description in Algorithm 4.
Algorithm 4 Determination of valence rule violations
 1: Input: intermediate tree T = (AT ,BT , ET), current pointer vertex ipoint, previous pointer vertex ĩpoint, current decision d, previous decision d̃, and record r(·) of available valence.
 2: Output: newly updated r and the list D of decisions that violate the valence rule.
 3: if d ∈ Xatom then
 4:   Set r(ipoint) ← v(d).  ▷ Initialize record by atom valence.
 5:   Set r(ipoint) ← r(ipoint) − o(d̃).  ▷ Update record using the previously added bond vertex.
 6:   Set D ← {x | x ∈ Xbond, o(x) > r(ipoint)}.  ▷ Reject bond orders higher than the record.
 7:   if r(ipoint) < min_{x∈Xbond} o(x) then
 8:     Set D ← D ∪ {"(", "*"}.  ▷ Reject decisions requiring a minimal amount of valence.
 9: if d ∈ Xbond then
10:   if d̃ ≠ "(" then
11:     Set r(ĩpoint) ← r(ĩpoint) − o(d).  ▷ Update record of the previously added atom vertex.
12:   else
13:     Set r(ĩpoint) ← r(ĩpoint) − o(d) + min_{x∈Xbond} o(x).  ▷ Update the previously added atom vertex considering pre-allocated valence.
14:   Set D ← {x | x ∈ Xatom, v(x) < o(d)}.  ▷ Reject atom valences lower than the bond order.
15:   Set D ← D ∪ {x | x ∈ Lres, r(x) < o(d) − min_{x∈Xbond} o(x)}.  ▷ Reject residual-edge candidates with valence lower than the bond order.
16: if d = "(" then
17:   Set r(ipoint) ← r(ipoint) − min_{x∈Xbond} o(x).  ▷ Pre-allocate the minimum bond order.
18:   Set D ← {x | x ∈ Xbond, o(x) > r(ipoint)}.  ▷ Reject bond orders higher than the record.
19: if d = ")" then
20:   Set D ← ∅.
21: if d = "*" then
22:   Set r(ipoint) ← r(ipoint) − min_{x∈Xbond} o(x).  ▷ Pre-allocate the minimum bond order.
23:   Set D ← {x | x ∈ Xbond, o(x) > r(ipoint)}.  ▷ Reject bond orders higher than the record.
24: if d ∈ Lres then
25:   Set r(d) ← r(d) + min_{x∈Xbond} o(x) − o(x_ipoint).  ▷ Update the record of the atom vertex closing the residual edge.
26:   Set D ← ∅.
Given Algorithms 3 and 4, we establish the following theoretical guarantee.

Definition 2. A valid molecular graph G = (A,B, E) satisfies the valence rule if $v(x_a) \ge \sum_{b \in \mathcal{N}(a)} o(x_b)$ for every atom vertex a ∈ A.

Theorem 2. Let G = (A,B, E) be the graph updated by Algorithm 1 given a sequence of decisions d1, . . . , dT. If the sequence of decisions satisfies the criteria defined by Algorithms 3 and 4, the corresponding graph G satisfies the valence rule.
Proof. To prove the validity of our algorithms, we show that (1) $r(a) \le v(x_a) - \sum_{b \in \mathcal{N}(a)} o(x_b)$ and (2) $r(a) \ge 0$ hold at every time-step when applying Algorithm 4 to the sequence of decisions d1, . . . , dT; together, (1) and (2) imply the valence rule.
For (1), we note that the record of an atom a is initialized as $v(x_a) - \sum_{b \in \mathcal{N}(a)} o(x_b)$ whenever it is newly added to the graph G by a decision. Furthermore, whenever a new edge is added by an attach bond or res bond operation, the corresponding bond order o(xb) is deducted from the record. Importantly, the minimum bond order $\min_{x \in \mathcal{X}_{\text{bond}}} o(x)$ is also added back to the record for an attach bond operation immediately following a branch start operation, and for a res bond operation. We note that this does not harm (1) since the minimum bond order has already been deducted (i.e., pre-allocated) by the corresponding branch start and res atom operations, respectively.
For (2), one can observe that Algorithm 4 filters out the atoms and bonds that would make the record of the corresponding atom negative. This completes the proof of the theorem.
D IMPLEMENTATION DETAILS
In this section, we provide specific details on how we implement the STGG framework for our experiments.
Training details. For all experiments, we train the Transformer under the STGG framework for 100 epochs with a batch size of 128. We use the AdamW (Loshchilov & Hutter, 2019) optimizer with a constant learning rate of 10^-4. We use three Transformer layers for QM9 and ZINC250K and six for MOSES. The rest of the Transformer-related configuration follows that of the original work (Vaswani et al., 2017); we use the attention module with an embedding size of 1024 and eight heads, an MLP with a hidden dimension of 2048, and dropout with probability 0.1. Using a single Quadro RTX 6000 GPU, it takes approximately three, ten, and 96 hours to fully train the models on the QM9, ZINC250K, and MOSES datasets, respectively.
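For concreteness, a minimal PyTorch sketch of this training setup is given below. The vocabulary size and the dummy batch are placeholders of our own; the layer sizes and optimizer settings follow the values reported above.

```python
import torch
import torch.nn as nn

VOCAB, D_MODEL, SEQ = 64, 1024, 32            # vocabulary size is a stand-in
layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8,
                                   dim_feedforward=2048, dropout=0.1,
                                   batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=3)   # 3 layers (QM9/ZINC)
embed, head = nn.Embedding(VOCAB, D_MODEL), nn.Linear(D_MODEL, VOCAB)
params = (list(backbone.parameters()) + list(embed.parameters())
          + list(head.parameters()))
opt = torch.optim.AdamW(params, lr=1e-4)

tokens = torch.randint(0, VOCAB, (128, SEQ))  # one dummy batch of decisions
causal = torch.triu(torch.full((SEQ - 1, SEQ - 1), float("-inf")), diagonal=1)
logits = head(backbone(embed(tokens[:, :-1]), mask=causal))
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                   tokens[:, 1:].reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()
```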
Pre-processing. For all datasets, we use the following atom vocabulary Xatom: { "CH", "CH2", "CH-", "CH2-", "C", "N-", "NH-", "N", "NH", "N+", "NH+", "NH2+", "NH3+", "O-", "O", "O+", "OH+", "F", "P", "PH", "PH2", "P+", "PH+", "S-", "S", "S+", "SH", "SH+", "Cl", "Br", "I" }. Note that we assign different features to atoms of the same atomic number with different numbers of explicit hydrogens and formal charges. This allows our algorithm to properly allocate the maximum valence for each atom feature. Next, we use the bond vocabulary Xbond = {"-", "=", "#"}, corresponding to bond orders of single, double, and triple, respectively. For an explicit calculation of the atom valence during molecular construction, we train our models on kekulized molecules, i.e., aromatic bonds are fixed to single or double bonds.
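A hedged RDKit sketch of this pre-processing (kekulization plus charge- and hydrogen-aware atom tokens) could look as follows; the `atom_token` helper is our own illustrative construction, not the paper's exact vocabulary builder.

```python
from rdkit import Chem

def atom_token(atom):
    """Illustrative token: symbol plus explicit hydrogens and formal charge,
    so that each token carries a well-defined maximum valence."""
    sym = atom.GetSymbol()
    h = atom.GetNumExplicitHs()
    chg = atom.GetFormalCharge()
    token = sym + (("H" + (str(h) if h > 1 else "")) if h else "")
    if chg:
        token += "+" if chg > 0 else "-"
    return token

mol = Chem.MolFromSmiles("c1ccccc1O")        # phenol
Chem.Kekulize(mol, clearAromaticFlags=True)  # fix aromatic bonds to single/double
BOND_TOKEN = {Chem.BondType.SINGLE: "-", Chem.BondType.DOUBLE: "=",
              Chem.BondType.TRIPLE: "#"}
print([atom_token(a) for a in mol.GetAtoms()])
print([BOND_TOKEN[b.GetBondType()] for b in mol.GetBonds()])
```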
E EXAMPLE OF GENERATED MOLECULES
F OFFLINE OPTIMIZATION OF MOLECULES
In this section, we describe our offline molecular optimization algorithm mainly inspired by existing works in offline reinforcement learning (Schmidhuber, 2019; Chen et al., 2021; Janner et al., 2021) and offline model-based optimization (Kumar & Levine, 2020).
For maximizing a reward function defined on molecules, our algorithm consists of two simple steps. First, it trains a conditional generative model pθ(m|γ), where m is the molecule and γ is the reward evaluated on molecules in the offline dataset. Next, the reward-conditional generative model samples highly-rewarding molecules by generating conditioned on high values of γ. In particular, we set the value of γ to extrapolate outside the training dataset. Owing to the expressive power of the Transformer architecture, our algorithm can successfully generate highly-rewarding molecules.
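One simple way to realize such conditioning, sketched below under our own assumptions (the text does not specify the exact conditioning mechanism), is to embed the scalar γ and add it to every token embedding before the Transformer backbone, mirroring return-conditioned offline RL.

```python
import torch
import torch.nn as nn

class RewardConditioning(nn.Module):
    """Illustrative reward conditioning for a sequence model."""
    def __init__(self, d_model: int):
        super().__init__()
        self.gamma_proj = nn.Linear(1, d_model)

    def forward(self, token_emb: torch.Tensor, gamma: torch.Tensor):
        # token_emb: (batch, length, d_model); gamma: (batch, 1)
        return token_emb + self.gamma_proj(gamma).unsqueeze(1)

# Training: condition on the dataset PLogP of each molecule.
# Sampling: condition on an extrapolated target, e.g. gamma = plogp_max + delta.
```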
G ADDITIONAL EXPERIMENTAL RESULTS ON MOLECULAR OPTIMIZATION
H COMPARISON WITH CG-VAE
In this section, we additionally compare our STGG framework with the CG-VAE model (Liu et al., 2018), another atom-by-atom graph generative model that masks out the action space so as to generate molecules satisfying the valence rules. Compared to Table 2, we additionally use the FCD, SNN, FRAG, and SCAF metrics to measure how faithfully the generative models learn the underlying distribution of molecules. Note that the VALID metric used in Table 2 is insufficient for comparing the faithfulness of STGG and CG-VAE, since both are guaranteed to generate molecules satisfying the valence rule.
In Tables 6 and 7, one can observe that our algorithm substantially outperforms CG-VAE in terms of faithfully learning the underlying distribution of molecules, at the cost of a relatively lower UNIQUE. For example, the FCD score of our STGG on the ZINC dataset is 0.2775, while that of CG-VAE is 11.33. This highlights the expressive power of our STGG framework. | 1. What is the main contribution of the paper in the field of graph generation?
2. What are the strengths of the proposed approach, particularly in its novelty and conceptual departure from prior methods?
3. What are the weaknesses of the paper regarding the validation and establishment of some of the design decisions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a new paradigm of graph generation: a Transformer network sequentially generates a sequence of decisions that builds a spanning tree of a bipartite molecular graph. These decisions take seven forms, such as attach atom, attach bond, and branch start. After the spanning tree, the model adds residual edges. The authors also focus on generating valid graphs (which is seen in the results), masking out invalid decisions during the generation process itself.
Review
Strengths: I think this work has a lot of novelty and merit. The idea of using a Transformer that can view earlier decisions with attention mechanisms increases the possibility of valid molecular generation. This also forms a conceptual departure from highly sequential or reinforcement learning-based methods. They show their results on three datasets: MOSES, QM9, and ZINC250K. As can be seen in the results, many previous methods do not run well on QM9 and are often overfit to 1-2 benchmarks. The authors also perform ablation tests which show the improvements from several of their design choices, including the use of graph structure, the use of a Transformer, and the use of graph-construction and valence masks.
Weaknesses:
Although the ablation studies used by the authors validate some of the design decisions, they don't validate or establish the need for several of their most novel decisions: 1. the two-pronged system of first generating spanning trees and then residual edges, 2. the use of a bipartite graph rather than one that uses edges to represent bonds, 3. the use of spanning trees themselves. In addition, I would like to see some results on how long it takes to train this Transformer model as opposed to other models, due to what I believe is a highly increased number of parameters given the positional encoding and the attention mechanisms.
ICLR | Title
Spanning Tree-based Graph Generation for Molecules
Abstract
In this paper, we explore the problem of generating molecules using deep neural networks, which has recently gained much interest in chemistry. To this end, we propose a spanning tree-based graph generation (STGG) framework based on formulating molecular graph generation as a construction of a spanning tree and the residual edges. Such a formulation exploits the sparsity of molecular graphs and allows using compact tree-constructive operations to define the molecular graph connectivity. Based on the intermediate graph structure of the construction process, our framework can constrain its generation to molecular graphs that satisfy the chemical valence rules. We also newly design a Transformer architecture with tree-based relative positional encodings for realizing the tree construction procedure. Experiments on QM9, ZINC250k, and MOSES benchmarks verify the effectiveness of the proposed framework in metrics such as validity, Fréchet ChemNet distance, and fragment similarity. We also demonstrate the usefulness of STGG in maximizing penalized LogP value of molecules.
1 INTRODUCTION
Researchers have extensively studied graph generative models, dating back to the early work of Erdős and Rényi (Erdös et al., 1959). Recently, models based on deep neural networks (DNNs) have gained much attention due to their expressive power in learning a graph dataset. The molecule-generating DNNs stand out among them for their success in the task of drug discovery.
Recent works have proposed molecule-generating DNNs based on string-based and graph-based representations (Segler et al., 2018; Jin et al., 2018; You et al., 2018; Shi et al., 2020; Jin et al., 2020). For example, Segler et al. (2018) proposed to train language models on a domain-specific linear string representation of molecules, i.e., the simplified molecular-input line-entry system (SMILES, Weininger 1988). Since string-based models ignore the inherent graph structure, recent works explore graph-based generation using (a) atom-by-atom (You et al., 2018; Shi et al., 2020; Luo et al., 2021) or (b) substructure-based (Jin et al., 2018; 2019; 2020) operations.
Notably, the substructure-based generative models (Jin et al., 2018; 2019; 2020) successfully exploit molecular prior knowledge: the graphs are sparsely connected and can be represented as a junction tree with molecular substructures as building blocks. Based on such knowledge, the models use junction-tree construction operators which (a) require fewer steps to generate the whole molecular graph and (b) guarantee generating molecules that satisfy the chemical valence rules. However, despite such advantages, a recent benchmark (Polykovskiy et al., 2020) suggests that they do not outperform the existing methods in terms of learning the data distribution, even when compared with simple SMILES-based language models. We hypothesize that this is because the models use a coarse-grained representation of the molecule and may lack the ability to learn the inner semantics of each substructure-based building block.
Contribution. In this work, we propose a novel framework, coined spanning tree-based graph generation (STGG), for fine-grained generation of molecules while exploiting their sparsity.1 Mainly inspired from the SMILES representation of molecules, our idea is to generate the molecular graph
1While our framework is designed for general sparse graphs, we focus on the molecular graphs in this paper.
as a composition of a spanning tree and the corresponding residual edges with atoms and bonds as building blocks. Such a formulation allows our framework to utilize compact tree-constructive operations to define the molecular graph connectivity. See Figure 1 for an illustration of how we formulate the generation of a molecular graph as a sequence of tree-constructive operations.
Since our framework maintains the molecular graph structure during construction, it can pre-determine decisions that (a) violate the graph construction rule and (b) lead to molecules that violate the chemical valence rule. Such criteria allow control over the generative model to guarantee generating valid molecular graphs by forbidding invalid actions. This is in contrast to prior works (Shi et al., 2020; Luo et al., 2021) that generate the molecular graph atom-by-atom but determines the validity of construction operations through a sample-rejection scheme.
To recognize the spanning tree-based representation used in our STGG framework, we propose a Transformer architecture (Vaswani et al., 2017) with tree-based relative encoding. Inspired by recent works (Villmow et al., 2021; Lukovnikov & Fischer, 2021; Ying et al., 2021) on tree-based and graph-based Transformers, our framework expresses the relative position between two vertices as the number of forward and reverse edges in the shortest path between them. We also introduce an attention-based mechanism for constructing residual edges.
We experiment on the popular graph generation benchmarks QM9, ZINC250K, and MOSES to validate the effectiveness of our algorithm. In the experiments on QM9 and ZINC, our STGG framework outperforms the existing graph-based generative models by a large margin. On the MOSES benchmark, our algorithm achieves superior performance compared to both string-based and graph-based methods for the majority of the metrics, e.g., Fréchet ChemNet distance (Preuer et al., 2018) and fragment-based similarity. We also conduct experiments on the offline optimization task for a high penalized octanol-water partition coefficient and achieve competitive results.
2 SPANNING TREE-BASED GENERATION OF GRAPHS (STGG)
2.1 OVERVIEW
In this section, we introduce our spanning tree-based graph generation (STGG) framework to sequentially generate a molecule as a composition of a spanning tree and residual edges. To this end, we propose compact tree-constructive operations inspired by the simplified molecular-input line-entry system (SMILES, Weininger, 1988). In contrast to the existing SMILES-based molecular generative methods, our framework (a) allows inferring the intermediate graph structure and (b) is generally applicable to graph types other than molecules. In particular, (a) further enables our framework to control the construction process such that the sequential operations comply with tree-constructive grammar and only generate molecules satisfying the chemical valence rule.
Molecular graph representation. To apply our framework, we represent a molecule as a bipartite graph G = (A,B, E), where A and B are the sets of vertices associated with the atoms and bonds of the molecule, respectively.2 Each edge {a, b} ∈ E is assigned to each adjacent pair of an atom and a bond. We assign attributes xa ∈ Xatom and xb ∈ Xbond to vertices a ∈ A and b ∈ B to indicate the corresponding atom type and bond order, respectively. For example, {"C", "N", "O"} ⊆ Xatom and {"-", "="} ⊆ Xbond. See Figure 1 for an example of such a molecular graph representation.

Molecular graph from a sequence of decisions. To generate the molecular graph G = (A,B, E), our framework makes a sequence of decisions d1, . . . , dT to generate a spanning tree T = (AT ,BT , ET )
2Many existing works, e.g., (Shi et al., 2020), use a non-bipartite graph with bonds assigned to edges.
and a set of residual edges ER = E \ET . At each iteration, seven types of decisions are applicable, i.e., attach atom, attach bond, branch start, branch end, res atom, res bond, and terminate. See Table 1 for examples of decisions and the corresponding operations. We provide a detailed description of the graph construction process in Section 2.2.
Generating valid molecular graphs. Without any control, a model may generate decisions that (a) do not comply with the grammar of STGG or (b) lead to a molecule violating the chemical valence rule. To prevent this scenario, we introduce two criteria for determining the validity of a given decision, addressing (a) and (b) respectively. We elaborate further in Section 2.3.
2.2 DECISION PROCESS FOR SPANNING TREE-BASED GRAPH GENERATION
We now explain how our STGG framework incorporates the decisions d1, . . . , dT to build the spanning tree T = (AT ,BT , ET ) and residual edges ER from scratch. To this end, our framework introduces the state information of (a) a pointer vertex ipoint ∈ AT ∪ BT for specifying the target of the next operation, (b) a stack Sbranch that stores vertices to use later as the starting point of a “branch” in the spanning tree, and (c) a list Lres that stores vertices to use later for constructing residual edges.
Algorithm 1 Tree-based generation of molecular graphs
 1: Input: sequence of decisions d1, . . . , dT.
 2: Output: graph G = (A,B, E), atom attributes {xa}a∈A, and bond attributes {xb}b∈B.
 3: Set AT ← ∅, BT ← ∅, ET ← ∅, ER ← ∅, and T ← (AT ,BT , ET).  ▷ Initialize the empty graph.
 4: Set Lres as an empty list and Sbranch as an empty stack.
 5: for t = 1, . . . , T do
 6:   if dt ∈ Xatom then  ▷ Add a new atom vertex.
 7:     Create a new atom vertex a and set AT ← AT ∪ {a} and xa ← dt.
 8:     If |BT| > 0, set ET ← ET ∪ {{a, ipoint}}.  ▷ An edge is added when the tree is not empty.
 9:     Set ipoint ← a.
10:   if dt ∈ Xbond then  ▷ Add a new bond vertex.
11:     Create a new bond vertex b and set ET ← ET ∪ {{b, ipoint}}, BT ← BT ∪ {b}, and xb ← dt.
12:     Set ipoint ← b.
13:   if dt = "*" then insert ipoint into Lres.  ▷ Add the pointer vertex into the list.
14:   if dt ∈ Lres then pop dt from Lres and update ER ← ER ∪ {{ipoint, dt}}.  ▷ Add a new residual edge.
15:   if dt = "(" then insert ipoint into Sbranch.  ▷ Add the pointer vertex into the stack.
16:   if dt = ")" then set ipoint ← pop(Sbranch).  ▷ Update the pointer vertex from the stack.
17: Set A ← AT, B ← BT, and E ← ET ∪ ER.
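For readers who prefer code, the following Python sketch replays a decision sequence into a graph in the spirit of Algorithm 1; the vertex ids, container choices, and names are illustrative assumptions, not the released implementation.

```python
def build_graph(decisions, atom_vocab, bond_vocab):
    """Replay STGG decisions into (atoms, bonds, edges); vertices are ints."""
    atoms, bonds, edges = {}, {}, set()          # id -> token, plus edge pairs
    pointer, res_list, branch_stack, next_id = None, [], [], 0
    for d in decisions:
        if d == "[eos]":
            break
        if d in atom_vocab or d in bond_vocab:
            v, next_id = next_id, next_id + 1
            (atoms if d in atom_vocab else bonds)[v] = d
            if pointer is not None:
                edges.add(frozenset((pointer, v)))   # spanning-tree edge
            pointer = v
        elif d == "*":
            res_list.append(pointer)                 # open a future ring
        elif d == "(":
            branch_stack.append(pointer)             # remember branch root
        elif d == ")":
            pointer = branch_stack.pop()             # return to branch root
        else:                                        # residual-edge closure (vertex id)
            res_list.remove(d)
            edges.add(frozenset((pointer, d)))
    return atoms, bonds, edges
```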
In what follows, we describe the seven types of operations, i.e., attach atom, attach bond, branch start, branch end, res atom, res bond, and terminate, corresponding to decision values d ∈ Xatom ∪ Xbond ∪ Lres ∪ {"(", ")", "*", "[eos]"}, in detail. See Table 1 for the pairs of operations and the corresponding decisions. We also provide an example of the graph construction process in Figure 2.
Attaching atom and bond vertices to the spanning tree. If the decision d specifies one of the atom or bond attributes, i.e., d ∈ Xatom or d ∈ Xbond, it applies the corresponding attach atom or attach bond operation, respectively. To be specific, the attach atom operation adds a new atom vertex a into the spanning tree T as a neighbor of the pointer vertex ipoint, i.e., AT ← AT ∪ {a} and ET ← ET ∪ {{a, ipoint}}. The value d is set as the new atom attribute, i.e., xa ← d. The newly added vertex is set as the next pointer vertex, i.e., ipoint ← a. The attach bond operation similarly adds a new bond vertex. For example, a line graph can be expressed as a sequence of attach atom and attach bond operations, e.g., C-C-C, where "C" ∈ Xatom and "-" ∈ Xbond.

Branching out the spanning tree. To express graph structures with vertices of degree larger than two, our framework utilizes pairs of branch start and branch end operations with decision values "(" and ")", respectively. To be specific, the branch start operation inserts the current pointer vertex into a stack Sbranch of vertices. Then the branch end operation pops a vertex from the stack Sbranch and sets it as the new pointer vertex. For example, a graph with one atom vertex of degree three is constructed from the sequence of decisions C-C(-C)(-C).
Adding residual edges. To construct cyclic molecular graphs, our framework generates residual edges based on pairs of res atom and res bond operations, corresponding to decision values "*" and d ∈ Lres, respectively. To be specific, the res atom operation inserts the current (atom) pointer vertex into a list Lres. Next, when the decision value d ∈ Lres is received for the res bond operation, the corresponding vertex d is popped from the list Lres and forms a new residual edge with the current (bond) pointer vertex, i.e., ER ← ER ∪ {{d, ipoint}}. For example, a cyclic molecular graph is constructed from the sequence of decisions C*-O-O-1, where "1" indicates a res bond operation whose decision is the first atom vertex with attribute "C".
Termination. The decision "[eos]" applies the terminate operation to finish the construction. We provide the full algorithm in Algorithm 1. We also provide an algorithm to extract a sequence of decisions for constructing a given graph in Appendix A. Such an algorithm is used to obtain sequences of decisions as targets for training the generative model under the STGG framework.
2.3 MASKING OUT INVALID DECISIONS FOR A VALID MOLECULAR GRAPH
Based on Algorithm 1, we develop two criteria for determining whether a sequence of decisions leads to (a) a valid generation of a molecular graph and (b) the generation of a molecule satisfying the valence rule. Such criteria are used to mask out invalid decisions to guarantee generating a valid molecular graph.
Validity of graph generation. To determine whether a sequence of decisions leads to a valid generation of a molecular graph, we propose an algorithm that outputs the set of valid decisions given the previous decision d, the stack of pointer vertices Sbranch, and the list of atom vertices Lres during the execution of Algorithm 1. In what follows, we provide a brief description of the grammar enforced by the algorithm. We also provide the detailed algorithm in Appendix B.
• The branch end operation only appears when the stack of pointer vertices Sbranch is non-empty.
• The res atom and res bond operations are atom-specific and bond-specific; hence they only appear when the pointer vertex is located at an atom vertex and a bond vertex, respectively.
• All bond vertices have degree two; hence the branch start and branch end operations only appear when the pointer vertex is located at an atom vertex.
• The stack does not contain duplicates of a pointer vertex at the same time.
Here, we note that our criteria for valid molecular graph generation do not enforce branches and rings to be closed, e.g., C*=C-C#N is allowed by our criteria. This does not violate the validity since our Algorithm 1 may still define a valid molecular graph by ignoring the open branches and the open rings during construction, i.e., C*=C-C#N generates a molecule identical to that of C=C-C#N.
Validity of satisfying the valence rule. To consider the chemical validity of molecules, our framework offers the ability to constrain its generation to molecules that satisfy the valence rule for each atom. That is, the generated graph G = (A,B, E) satisfies the constraint $v(x_a) \ge \sum_{b \in \mathcal{N}(a)} o(x_b)$ for every atom vertex a ∈ A, where v(xa) denotes the valence of an atom type xa and o(xb) denotes the bond order. To this end, we keep a record r(a) of the available valence for each atom a ∈ A and update it after each decision. For example, when a bond vertex b is newly added as a neighbor of an atom vertex a, the record is updated by r(a) ← r(a) − o(xb). The main idea is to forbid actions that lead to negative values of r(a). We provide a detailed algorithm in Appendix C.
3 TRANSFORMER ARCHITECTURE FOR TREE-BASED GENERATION
In this section, we describe our deep neural network architecture for generating sequences of decisions d1, . . . , dT under the STGG framework. To accurately recognize the decision process, we employ tree-based relative positional encodings on the intermediate spanning tree T. We also introduce an attention mechanism to express a probability distribution over Lres, which depends on the intermediate state of the algorithm.
3.1 TREE-BASED POSITIONAL ENCODING FOR MULTI-HEAD ATTENTION LAYERS
Each intermediate layer in our model is a combination of a multi-head self-attention module and a position-wise feed-forward neural network, similar to that of Vaswani et al. (2017). The main difference lies in how we modify the architecture to incorporate tree-based positional encodings. To be specific, let $H = [h_1^\top, \ldots, h_T^\top] \in \mathbb{R}^{T \times \ell}$ denote the input of a self-attention module, where $\ell$ is the hidden dimension and $h_t \in \mathbb{R}^{1 \times \ell}$ is the hidden representation at position t. The input H is projected by three matrices $W_Q \in \mathbb{R}^{\ell \times \ell_K}$, $W_K \in \mathbb{R}^{\ell \times \ell_K}$, and $W_V \in \mathbb{R}^{\ell \times \ell_V}$ to the corresponding representations Q, K, and V, respectively. A single self-attention head is then calculated as
$$Q = HW_Q, \qquad K = HW_K, \qquad V = HW_V, \tag{1}$$
$$A = \frac{QK^\top}{\sqrt{\ell_K}} + P, \qquad P_{t_1,t_2} = z^{(1)}_{\phi_{\text{forward}}(t_1,t_2)} + z^{(2)}_{\phi_{\text{backward}}(t_1,t_2)} + z^{(3)}_{\phi_{\text{seq}}(t_1,t_2)}, \tag{2}$$
$$\mathrm{Attention}(H) = \mathrm{SoftMax}(M \circ A)V, \tag{3}$$
where Attention(H) is the output of the attention head, M is the triangular mask that forbids the model from accessing future information while making a prediction, and ◦ denotes element-wise multiplication between matrices.
Furthermore, P is the newly introduced relative positional encoding. It is a summation over the trainable embedding vectors $z^{(1)}, z^{(2)}, z^{(3)}$ indexed by the relative position values $\phi_{\text{forward}}(t_1, t_2)$, $\phi_{\text{backward}}(t_1, t_2)$, and $\phi_{\text{seq}}(t_1, t_2)$. To be specific, the tree-based relative positions $\phi_{\text{forward}}(t_1, t_2)$ and $\phi_{\text{backward}}(t_1, t_2)$ denote the numbers of forward and backward edges in the spanning tree path between the pointer vertices at the $t_1$-th and $t_2$-th time steps. The direction of an edge is determined by the order of generation in the STGG framework. Such an encoding is inspired by recent works (Villmow et al., 2021; Lukovnikov & Fischer, 2021; Ying et al., 2021) using Transformers to recognize graphs and trees. Finally, the sequence-based relative position $\phi_{\text{seq}}(t_1, t_2) = t_1 - t_2$ denotes the relative difference of time-steps between the decisions.
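A minimal PyTorch sketch of the bias P in Eq. (2) is given below, assuming the three index tensors are precomputed per sequence (and clamped to the table size). The module name and per-head scalar parameterization are our own illustrative choices.

```python
import torch
import torch.nn as nn

class TreeRelativeBias(nn.Module):
    """Three embedding tables indexed by forward-edge count, backward-edge
    count, and sequence offset between two positions (sketch of Eq. 2)."""
    def __init__(self, max_dist: int = 64):
        super().__init__()
        self.fwd = nn.Embedding(max_dist, 1)
        self.bwd = nn.Embedding(max_dist, 1)
        self.seq = nn.Embedding(2 * max_dist, 1)

    def forward(self, phi_fwd, phi_bwd, phi_seq):
        # each phi_*: (T, T) integer tensor of relative positions
        bias = self.fwd(phi_fwd) + self.bwd(phi_bwd) + self.seq(phi_seq)
        return bias.squeeze(-1)   # (T, T), added to QK^T / sqrt(ell_K)
```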
3.2 ATTENTION FOR UPDATING RESIDUAL EDGES.
Our model generates a categorical distribution over the space of X = Xatom ∪ Xbond ∪ Lres ∪ {"(", ")", "*", "[eos]"}. It is relatively straightforward to output an unnormalized probability over values of Xatom ∪ Xbond ∪ {"(", ")", "*"} using a linear classifier on top of the Transformer model. However, it is non-trivial to assign probability values for the res bond operation, i.e., decision values of d ∈ Lres, since Lres varies between time-steps. To handle this case, we use an attention-based mechanism for assigning unnormalized probabilities to decision values in the list Lres. To be specific, at the final layer of our model, we obtain the following probability distribution p(d):
$$p(d) \propto \begin{cases} m_g(d)\cdot m_v(d)\cdot \exp\!\big(w_d^\top h\big) & \forall d \in \mathcal{X}_{\text{atom}} \cup \mathcal{X}_{\text{bond}} \cup \{\text{"("}, \text{")"}, \text{"*"}\}, \\ m_g(d)\cdot m_v(d)\cdot \exp\!\big(h_d^\top W_1 W_2^\top h\big) & \forall d \in \mathcal{L}_{\text{res}}, \end{cases} \tag{4}$$
where $w_d \in \mathbb{R}^{1\times\ell}$ is a decision-specific vector, $W_1, W_2 \in \mathbb{R}^{\ell\times\tilde{\ell}}$ are weight matrices, and $h$ is the decision embedding, i.e., the output of the Transformer layer corresponding to the previously made decision. Furthermore, $h_d$ is the embedding corresponding to a past decision $d \in \mathcal{L}_{\text{res}}$. Finally, $m_g(d)$ and $m_v(d)$ are the masks for excluding invalid decisions that violate the validity of graph generation and the valence rule, respectively. The masks are obtained using the criteria explained in Section 2.3. We use the masks during both training and evaluation of the model; this differs from existing graph-generative models, which forbid invalid decisions only at evaluation using a sample-rejection scheme.
4 RELATED WORKS
SMILES-based molecular generative models. Several studies proposed to generate a SMILES representation of molecules using string-based (Gómez-Bombarelli et al., 2016; Segler et al., 2018; Kim et al., 2021) or grammar-based (Kusner et al., 2017; Dai et al., 2018) models. While our STGG is largely inspired by such works, STGG exposes the intermediate graph structure of the molecule being constructed, which the SMILES-based models cannot. This difference allows the adoption of structure-aware deep neural networks in STGG. To be specific, the difference between STGG and the SMILES-based models stems from our newly introduced graph construction procedure using a pointer vertex ipoint, a vertex-list L, and a vertex-stack S. They allow recognizing an incomplete sequence of decisions as a graph and assigning positions to each decision. In contrast,
an incomplete SMILES string does not define a graph structure and assigning positions to each character is non-trivial.
Graph-based molecular generative models. Researchers have developed a large variety of molecular graph generation frameworks based on atom-wise and bond-wise operations (You et al., 2018; Kajino, 2019; Popova et al., 2019; Madhawa et al., 2019; Honda et al., 2019; Shi et al., 2020; Zang & Wang, 2020; Luo et al., 2021). Our STGG framework simplifies the decision space of such models by exploiting the tree-like graph structures of molecules. To be specific, STGG requires O(|A| + |B|) decisions for constructing a molecule, while the existing atom-by-atom graph generative models typically require O(|A|^2) decisions. This implies that our generative model requires a smaller number of decisions for sparse graphs like molecules, i.e., when |B| is small. Furthermore, our work is the first to successfully train a Transformer architecture (Vaswani et al., 2017) for graph-based molecule generation.
In another line of research, several works (Jin et al., 2018; 2019; 2020) proposed generative models based on the junction-tree representation with molecular substructures as building blocks. Based on such a representation, these works utilize tree-constructive operations to generate the full graph. Since they operate on such a coarse-grained molecular representation, they typically require fewer building blocks to generate the whole molecule. In comparison, our STGG framework utilizes a more fine-grained molecular representation and may additionally learn the inner semantics of the substructures that are used as building blocks for the junction tree.
5 EXPERIMENT
In this section, we report the experimental results of the proposed spanning tree-based graph generation (STGG) framework. In Section 5.1, we compare with the existing graph generative models on the ZINC250K (Irwin et al., 2012) and QM9 (Ramakrishnan et al., 2014) datasets and provide ablation studies on each component of our method using the ZINC250K dataset. In Section 5.2, we compare with the existing molecule generative models using the MOSES benchmark (Polykovskiy et al., 2020). Finally, in Section 5.3, we provide our results on the molecular optimization task with respect to the penalized octanol-water partition coefficient function (PLOGP). We provide the implementation details and illustrations of the generated molecules in Appendix D and E, respectively.
5.1 MOLECULE GENERATION ON ZINC250K AND QM9 DATASETS
We first compare to the literature standard for the molecular generation task in the ZINC250K and the QM9 datasets. To this end, we train our generative model on the respective datasets and sample 10,000 molecules to measure (a) the ratio of valid molecules (VALID), (b) the ratio of unique molecules (UNIQUE), and (c) the ratio of novel molecules with respect to the training dataset (NOVEL). We compare with the numbers reported by recently proposed graph generative models (Shi et al., 2020; Luo et al., 2021). We also provide an additional baseline of a transformer architecture trained to generate the SMILES representation for the molecule (SMILES-TRANSFORMER).
We mark CORRECTABLE for methods which can optionally use a sample-rejection scheme to forbid decisions that violate the chemical rules at evaluation. Note that our framework can train the generative model under the valence correction mask during training, while existing graph generative models use the valence correction only at evaluation. For a fair comparison, however, we do not use the valence correction mask during training in this experiment.
We report the experimental results in Table 2. In the table, we observe that our STGG framework outperforms all the existing molecular graph generative models in terms of VALID, at the cost of a relatively lower NOVEL. In particular, our generative model achieves a 100% ratio of valid molecules on the QM9 dataset even without any correction procedure. Such a result highlights how our model can effectively learn the chemical rules and model the underlying distribution.
Finally, the fact that our STGG framework performs better than the SMILES-based Transformer implies that the performance of our generative model stems from the STGG framework rather than from the Transformer architecture alone.
Ablation studies. We also conduct ablation studies on the ZINC250K dataset to verify the effectiveness of our method. To this end, we report the experimental results of our method without specific components. To be specific, we ablate the effects of using sequential relative positional encoding (S), tree-based relative positional encoding (T), the graph-construction mask (G), and the valence rule mask (V). We also consider an additional baseline using the absolute positional encoding (A) as in the original Transformer architecture (Vaswani et al., 2017). In Table 3 and Figure 4, one can observe how each component of our algorithm is crucial for achieving high VALID. In particular, the tree encoding is essential for the performance, showing the importance of the tree-based representation that we use in our model.
5.2 MOLECULE GENERATION ON THE MOSES BENCHMARK
We also compare our method on the MOSES benchmark with the existing models. The MOSES benchmark offers a large collection of metrics to assess the overall quality of generated molecules. To be specific, in addition to VALID, UNIQUE, and NOVEL, we consider the internal diversity of molecules (INTDIV), the ratio of samples accepted by chemical filters (FILTERS), Fréchet ChemNet Distance (FCD), nearest neighbor similarity (SNN), fragment similarity (FRAG), and scaffold similarity (SCAF). The similarity metrics FCD, SNN, FRAG, and SCAF are measured with respect to the test dataset of molecules and the scaffolds extracted from them.
In Tables 4 and 5, we provide our experimental results. Here, one can observe how our algorithm outperforms the existing works on 10 out of 15 metrics, including FILTERS, FCD-TEST, FCD-TESTSF, SNN-TEST, SNN-TESTSF, FRAG-TEST, FRAG-TESTSF, and SCAF-TEST. This highlights the ability of our STGG framework to successfully learn the training distribution.
5.3 MOLECULAR OPTIMIZATION FOR PENALIZED OCTANOL-WATER PARTITION COEFFICIENT
Finally, we demonstrate the usefulness of our STGG framework for the task of molecular optimization. To this end, we consider the literature standard of maximizing the penalized octanol-water partition coefficient (PLOGP). However, several works (Gao & Coley, 2020; Coley, 2020) have noted that the existing algorithms on this benchmark may not be practical, since PLOGP is ill-defined as a scoring function for molecules; it may assign high values to “unrealistic” molecules that are unstable and hard to synthesize in practice.
To address this aspect, we propose a new algorithm which can control the quality of molecules by trading off their scores against their realistic-ness. Using this algorithm, we demonstrate how our STGG is capable of generating both (a) high-scoring molecules and (b) realistic molecules with a reasonably high score. At a high level, we train a conditional generative model pθ(m|γ) under the STGG framework with PLOGP as the condition γ. At test time, we sample conditioned on a high value of γ to obtain high-scoring molecules. Such an algorithm is inspired by recent offline reinforcement learning algorithms (Schmidhuber, 2019; Kumar & Levine, 2020; Chen et al., 2021; Janner et al., 2021). We fully describe our molecular optimization algorithm in Appendix F.
In Table 5 and Figure 6, we report the results of our molecular optimization experiment. We provide additional illustrations of the generated molecules in Appendix G. Here, our STGG model is able to generate molecules with considerably high PLOGP scores outside the training distribution. Furthermore, in Figure 6 and Appendix G, one can observe how increasing γ gradually changes the optimized molecule from realistic structures to large, chain-like, and unrealistic structures.3 Given such results, one may conclude that our STGG combined with the offline optimization algorithm can successfully trade off high PLOGP against the realistic-ness of the generated molecules. However, we also remark that these results do not imply that our optimization results are strictly better than the baselines; we believe it is necessary to develop and incorporate quantitative measures of the realistic-ness of molecules to fairly evaluate molecular optimization algorithms. We believe such research to be an important future direction.
6 CONCLUSION
In this paper, we propose STGG, the first spanning tree-based framework for the generation of molecules using the Transformer architecture. The key idea of using a spanning tree for graph generation applies to any graph type beyond molecules; we believe such an extension of our work to be both promising and interesting. We also propose an offline algorithm for molecular optimization which allows trading off high scores against the realistic-ness of molecules. We leave further investigation of the newly proposed optimization algorithm as future work.
3This is in agreement with prior works (Shi et al., 2020; Ahn et al., 2020; Luo et al., 2021).
7 REPRODUCIBILITY STATEMENT
We provide an explicit description of our algorithms in Algorithm 1 and in Algorithms 2, 3, and 4 of the appendix. We list the hyper-parameters, the hardware used for the experiments, and the data-processing information in Appendix D. We provide illustrations of the molecules generated for the experiments in Figure 6 and Appendices E and G. We submit the full implementation of our STGG framework and the baselines used in our experiments as supplementary material.
A EXTRACTING SEQUENCE OF DECISIONS FROM A MOLECULAR GRAPH
In this section, we explain our algorithm for finding a sequence of decisions to construct a given molecular graph G = (A,B, E). The high-level idea is to first perform a depth-first search on G to find a spanning tree T = (A,B, ET) and the corresponding set of residual edges ER = E \ ET. Then the algorithm traverses the spanning tree T according to the depth-first search order while (a) allocating branch start and branch end for vertices with degree higher than two and (b) adding res atom and res bond operations for any vertex covered by a residual edge {a, b} ∈ ER. To this end, we utilize a stack Sdfs that stores the vertices of G and the branching tokens {"(", ")"} to visit. At each iteration, an element i is popped from the stack Sdfs. If i is a vertex, the algorithm adds the corresponding decision for the attach atom and attach bond operations. If the vertex has more than one successor with respect to the spanning tree T, the successors are inserted into the stack Sdfs with surrounding "(" and ")" tokens. If the vertex has only one successor, the successor is inserted into the stack without an additional operation. When the branching tokens {"(", ")"} are popped from the stack, the algorithm adds the corresponding decision value to the sequence of decisions. We describe the full scheme in Algorithm 2.
Algorithm 2 Generating a sequence of decisions for a molecular graph
 1: Input: graph G = (A,B, E), atom attributes {xa}a∈A, and bond attributes {xb}b∈B
 2: Find a spanning tree T = (A,B, ET) of G based on depth-first order and set ER ← E \ ET.
 3: Initialize an empty sequence of decisions D.
 4: Choose the root a ∈ A of T to insert into an empty stack Sdfs.
 5: do
 6:   Pop i from Sdfs.
 7:   if i ∈ A ∪ B then
 8:     Append xi to D.  ▷ Decision to attach the atom or bond vertex.
 9:     for j ∈ {j | j ∈ N(i), {i, j} ∈ ER} do  ▷ Decisions for residual edges.
10:       If i ∈ A, append "*" to D.
11:       If i ∈ B, append j to D.
12:     Let V denote {j | j ∈ N(i), j ∉ AT ∪ BT}.  ▷ Successors of i in depth-first order.
13:     If |V| > 1, insert "(" into Sdfs.  ▷ Allocate decision to record the pointer vertex.
14:     Insert the vertices in V into Sdfs.  ▷ Allocate successors to visit later.
15:     If |V| > 1, insert ")" into Sdfs.  ▷ Allocate decision to return to the pointer vertex.
16:   if i ∈ {"(", ")"} then
17:     Append i to D.
18: while |Sdfs| > 0
19: Output: sequence of decisions D = d1, . . . , dT to reconstruct G.
B ALGORITHMS FOR GRAPH MASKING
To determine whether a sequence of decisions leads to a valid generation of a molecular graph, we propose an algorithm that outputs the set of valid next decisions given the current decision d, the stack of pointer vertices S, and the list of atom vertices L during execution of Algorithm 1. We provide the full description in Algorithm 3.
Algorithm 3 Determination of grammar violation
1: Input: current decision d, stack S_branch, and list L_res.
2: Output: list of candidate decisions D that are valid.
3: if d ∈ X_atom then
4:   Set D ← X_bond. ▷ The atom vertex is followed by a bond vertex.
5:   Set D ← D ∪ {"("}. ▷ The atom vertex may have more than one successor.
6:   Set D ← D ∪ {"*"}. ▷ The atom vertex may have a neighboring residual edge.
7:   If |S_branch| > 0, set D ← D ∪ {")"}. ▷ The ")" decision appears only when S_branch is non-empty.
8:   Set D ← D ∪ {"[eos]"}. ▷ Allow termination.
9: if d ∈ X_bond then
10:  Set D ← X_atom. ▷ The bond vertex is followed by an atom vertex.
11:  Set D ← D ∪ L_res. ▷ The bond vertex may have a neighboring residual edge.
12: if d = "*" then
13:  Set D ← X_bond. ▷ The atom vertex is followed by a bond vertex.
14:  Set D ← D ∪ {"("}. ▷ The atom vertex may have more than one successor.
15:  Set D ← D ∪ {"*"}. ▷ The atom vertex may have a neighboring residual edge.
16:  If |S_branch| > 0, set D ← D ∪ {")"}. ▷ The ")" decision appears only when S_branch is non-empty.
17:  Set D ← D ∪ {"[eos]"}. ▷ Allow termination.
18: if d ∈ L_res then
19:  Set D ← {")"}. ▷ A residual edge is only constructed at the end of each branch.
20: if d = "(" then
21:  Set D ← X_bond. ▷ A branch always starts with a bond vertex.
22:  Set D ← D ∪ {"[eos]"}. ▷ Allow termination.
23: if d = ")" then
24:  Set D ← X_bond ∪ {"(", ")"}. ▷ A branch is followed by the start or end of another branch.
25:  Set D ← D ∪ {"[eos]"}. ▷ Allow termination.
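For concreteness, a minimal Python sketch of the grammar mask might look as follows (the token sets ATOM_TOKENS and BOND_TOKENS stand for X_atom and X_bond, branch_stack for S_branch, and res_candidates for L_res; all names are illustrative assumptions, and the released implementation may differ):

# Returns the set of decisions that do not violate the generation grammar,
# following the case analysis of Algorithm 3. Residual-edge candidates are
# assumed to be vertex indices, distinct from the string tokens.
def valid_next_decisions(d, branch_stack, res_candidates,
                         ATOM_TOKENS, BOND_TOKENS):
    if d in ATOM_TOKENS or d == "*":      # lines 3-8 and 12-17 coincide
        valid = set(BOND_TOKENS) | {"(", "*", "[eos]"}
        if branch_stack:                  # ")" needs a recorded pointer vertex
            valid.add(")")
    elif d in BOND_TOKENS:                # lines 9-11
        valid = set(ATOM_TOKENS) | set(res_candidates)
    elif d in res_candidates:             # lines 18-19
        valid = {")"}                     # a residual edge closes the branch
    elif d == "(":                        # lines 20-22
        valid = set(BOND_TOKENS) | {"[eos]"}
    elif d == ")":                        # lines 23-25
        valid = set(BOND_TOKENS) | {"(", ")", "[eos]"}
    else:
        valid = set()
    return valid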
We also establish a theoretical result showing that any sequence of decisions passing the checks of Algorithm 3 is a valid sequence of decisions for Algorithm 1. To this end, we define a valid molecular graph as follows.
Definition 1. A valid molecular graph G = (A,B, E) is a connected bipartite graph where the number of vertices adjacent to any bond vertex b ∈ B is exactly two, i.e., |N (b)| = 2.
This definition reflects that a molecule should have exactly two atoms connected to each bond. Combined with additional conditions that guarantee the well-definedness of Algorithm 1 on a sequence of decisions, we obtain the following result.
Theorem 1. Let G = (A, B, E), S, and L be the graph, stack of vertices, and list of vertices updated by Algorithm 1 on a sequence of decisions d_1, . . . , d_T. If the sequence of decisions satisfies the criteria defined by Algorithm 3, the following properties are satisfied.
P1 At the t-th step of Algorithm 1, |S| > 0 if dt = ")".
P2 At the t-th step of Algorithm 1, dt ∈ L if dt ∈ A ∪ B.
P3 When dT = [eos], the graph G is a valid molecular graph.
Here, P1 and P2 imply that the operations in Algorithm 1 are well-defined for d_1, . . . , d_T.
Proof. First, P1 is enforced by the step in Algorithm 3 which forbids the decision value ")" when the stack S is empty. Next, P2 is enforced by the step selecting decision values from the current list of vertices L. To enforce P3, when d_T = [eos], G (a) has to be a connected bipartite graph and (b) the number of vertices adjacent to any bond vertex has to be exactly two. For (a), Algorithm 3 allows the decision d_t ∈ X_atom ∪ L only when the pointer vertex is a bond vertex, i.e., d ∈ X_bond. Similarly, d_t ∈ X_bond is allowed only when d ∈ X_atom ∪ {"*", "("}. For (b), the algorithm does not allow adding a bond vertex b ∈ B to the list of vertices L, which would be required for a bond vertex to reach degree higher than two. Termination is not allowed when there exists a bond vertex with degree smaller than two.
C ALGORITHM FOR VALENCE MASKING
To ensure the chemical validity of molecules, our framework offers the ability to constrain generation to molecules that satisfy the valence rule for each atom. That is, the generated graph G = (A, B, E) satisfies the constraint v(x_a) ≥ Σ_{b∈N(a)} o(x_b) for every atom vertex a ∈ A, where v(x_a) denotes the valence of the atom type x_a and o(x_b) denotes the bond order of x_b.
To this end, we propose an algorithm which iteratively updates a record r(a) of available valence for each atom vertex a ∈ A. The key idea is to (a) update the record accordingly for each addition of an atom or bond order and (b) pre-allocate valence for the branch start and res atom operations by the minimum bond order min_{x∈X_bond} o(x). Part (b) is required since the branch start and res atom operations indicate future bond vertices to be added as neighbors of the current atom vertex. We provide the full description in Algorithm 4.
Algorithm 4 Determination of valence rule violation
1: Input: intermediate tree T = (A_T, B_T, E_T), current pointer vertex i_point, previous pointer vertex ĩ_point, current decision d, previous decision d̃, and record r(·) of available valence.
2: Output: newly updated r and the list D of decisions that violate the valence rule.
3: if d ∈ X_atom then
4:   Set r(i_point) ← v(d). ▷ Initialize record by atom valence.
5:   Set r(i_point) ← r(i_point) − o(d̃). ▷ Update record using the previously added bond vertex.
6:   Set D ← {x | x ∈ X_bond, o(x) > r(i_point)}. ▷ Reject bond orders higher than the record.
7:   if r(i_point) < min_{x∈X_bond} o(x) then
8:     Set D ← D ∪ {"(", "*"}. ▷ Reject decisions requiring a minimal amount of valence.
9: if d ∈ X_bond then
10:  if d̃ ≠ "(" then
11:    Set r(ĩ_point) ← r(ĩ_point) − o(d). ▷ Update record of the previously added atom vertex.
12:  else
13:    Set r(ĩ_point) ← r(ĩ_point) − o(d) + min_{x∈X_bond} o(x). ▷ Update the previously added atom vertex considering pre-allocated valence.
14:  Set D ← {x | x ∈ X_atom, v(x) < r(i_point)}. ▷ Reject atom valences lower than the bond order.
15:  Set D ← D ∪ {x | x ∈ L_res, r(x) < o(d) − min_{x∈X_bond} o(x)}. ▷ Reject residual-edge candidates with valence lower than the bond order.
16: if d = "(" then
17:  Set r(i_point) ← r(i_point) − min_{x∈X_bond} o(x). ▷ Pre-allocate the minimum bond order.
18:  Set D ← {x | x ∈ X_bond, o(x) > r(i_point)}. ▷ Reject bond orders higher than the record.
19: if d = ")" then
20:  Set D ← ∅.
21: if d = "*" then
22:  Set r(i_point) ← r(i_point) − min_{x∈X_bond} o(x). ▷ Pre-allocate the minimum bond order.
23:  Set D ← {x | x ∈ X_bond, o(x) > r(i_point)}. ▷ Reject bond orders higher than the record.
24: if d ∈ L_res then
25:  Set r(d) ← r(d) + min_{x∈X_bond} o(x) − o(x_{i_point}). ▷ Update record of the previously added atom vertex.
26:  Set D ← ∅.
Given Algorithms 3 and 4, we establish the following theoretical guarantee.

Definition 2. A valid molecular graph G = (A, B, E) satisfies the valency rule if v(x_a) ≥ Σ_{b∈N(a)} o(x_b) for every atom vertex a ∈ A.
Theorem 2. Let G = (A, B, E) be a graph updated by Algorithm 1 on a sequence of decisions d_1, . . . , d_T. If the sequence of decisions satisfies the criteria defined by Algorithms 3 and 4, the corresponding graph G satisfies the valency rule.
Proof. To prove the validity of our algorithms, we show that (1) r(a) ≤ v(x_a) − Σ_{b∈N(a)} o(x_b) and (2) r(a) ≥ 0 at any time-step when applying Algorithm 4 to the sequence of decisions d_1, . . . , d_T.
For (1), we note that the record of an atom a is initialized as v(x_a) − Σ_{b∈N(a)} o(x_b) whenever it is newly added to the graph G by a decision. Furthermore, whenever a new edge is added by an attach bond or res bond operation, the corresponding bond order o(x_b) is deducted from the record. Importantly, the minimum bond order min_{x∈X_bond} o(x) is also added back to the record for an attach bond operation consecutive to a branch start operation or for a res bond operation. We note that this does not harm (1) since the minimum bond order has already been deducted (i.e., pre-allocated) by the corresponding branch start and res atom operations, respectively.
For (2), one can observe that Algorithm 4 filters out the atoms and bonds that would make the record of the corresponding atom negative. This completes the proof.
D IMPLEMENTATION DETAILS
In this section, we provide specific details on how we implement the STGG framework for our experiments.
Training details. For all experiments, we train the Transformer under the STGG framework for 100 epochs with a batch size of 128 on every dataset. We use the AdamW optimizer (Loshchilov & Hutter, 2019) with a constant learning rate of 10^{-4}. We use three Transformer layers for QM9 and ZINC250K, and six for MOSES. The rest of the Transformer configuration follows the original work (Vaswani et al., 2017): attention modules with an embedding size of 1024 and eight heads, an MLP with hidden dimension 2048, and dropout with probability 0.1. Using a single Quadro RTX 6000 GPU, it takes approximately three, ten, and 96 hours to fully train the models on the QM9, ZINC250K, and MOSES datasets, respectively.
Pre-processing. For all datasets, we use the following set of atom vocabularies X_atom: { "CH", "CH2", "CH-", "CH2-", "C", "N-", "NH-", "N", "NH", "N+", "NH+", "NH2+", "NH3+", "O-", "O", "O+", "OH+", "F", "P", "PH", "PH2", "P+", "PH+", "S-", "S", "S+", "SH", "SH+", "Cl", "Br", "I" }. Note that we assign different features to the same atomic number with different numbers of explicit hydrogens and formal charges. This allows our algorithm to properly allocate the maximum valence for each atom feature. Next, we use the bond vocabulary X_bond = {"-", "=", "#"}, corresponding to bond orders of single, double, and triple, respectively. For explicit calculation of atom valence during molecular construction, we train our models on kekulized molecules, i.e., aromatic bonds are fixed to single or double bonds.
E EXAMPLE OF GENERATED MOLECULES
F OFFLINE OPTIMIZATION OF MOLECULES
In this section, we describe our offline molecular optimization algorithm mainly inspired by existing works in offline reinforcement learning (Schmidhuber, 2019; Chen et al., 2021; Janner et al., 2021) and offline model-based optimization (Kumar & Levine, 2020).
For maximizing a reward function defined on a molecule, our algorithm consists of two simple steps. First, our offline optimization algorithm trains a conditional generative model pθ(m|γ), where m is a molecule and γ is the reward evaluated on that molecule in the offline dataset. Next, the reward-conditional generative model samples highly-rewarding molecules by conditioning generation on high values of γ. In particular, we set the value of γ to extrapolate beyond the range observed in the training dataset. Owing to the high expressive power of the Transformer architecture, our algorithm can successfully generate highly-rewarding molecules.
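A minimal sketch of the two-step procedure, assuming a hypothetical conditional-model interface (model, encode, train_step, and sample are illustrative names, not the released code), is given below:

# Step 1 fits a reward-conditional generative model; Step 2 decodes with an
# extrapolated reward value gamma beyond the training range.
def offline_optimize(model, dataset, gamma_extrapolated, n_samples=100):
    # Step 1: condition each training sequence on its observed reward gamma.
    for molecule, gamma in dataset:
        model.train_step(tokens=encode(molecule), condition=gamma)
    # Step 2: sample conditioned on a reward beyond the dataset range, relying
    # on the model extrapolating the reward-to-structure mapping.
    return [model.sample(condition=gamma_extrapolated) for _ in range(n_samples)]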
G ADDITIONAL EXPERIMENTAL RESULTS ON MOLECULAR OPTIMIZATION
H COMPARISON WITH CG-VAE
In this section, we additionally compare our STGG framework with the CG-VAE model (Liu et al., 2018), another atom-by-atom graph generative model that masks out the action space to generate molecules satisfying the valence rules. Compared to Table 2, we additionally use the FCD, SNN, Frag, and Scaf metrics to measure how faithfully the generative models learn the underlying distribution of molecules. Note that the VALID metric used in Table 2 is insufficient for comparing the faithfulness of STGG and CG-VAE since both are guaranteed to generate molecules satisfying the valence rule.
In Tables 6 and 7, one can observe that our algorithm substantially outperforms CG-VAE in faithfully learning the underlying distribution of molecules, at the cost of a relatively lower UNIQUE score. For example, the FCD score of our STGG on the ZINC dataset is 0.2775 while that of CG-VAE is 11.33. This highlights the expressive power of our STGG framework.
Summary Of The Paper
This paper presents a method to construct a molecular graph, inspired by spanning trees. Specifically, a molecule is constructed by a sequence of actions, each of which adds an atom, adds a bond, starts/ends a branch, adds a residual edge for a circular structure, or terminates. The generative process is controlled by a Transformer-based neural network specialized to the tree-construction procedure. The empirical studies show that the proposed method can learn the distribution of molecules comparably to or better than existing methods, and can be used for molecular optimization.
Review
Strengths
+ Reasonable approach to construct a molecule based on spanning-tree representation
While there are a number of approaches to generate a graph based on a spanning-tree representation (i.e., SMILES), most of them treat a SMILES representation as text and do not exploit the spanning tree. The method proposed in this paper explicitly uses the spanning-tree representation to construct a molecule, which seems a very reasonable approach if one takes inspiration from SMILES.
+ Clarification on the known flaws in the penalized log P score
I appreciate that the authors clarify the limitation of this benchmark task and propose an alternative evaluation scheme. As stated in Section 5.3, this benchmark does not serve as a proper benchmark task, yet few researchers discuss it. This statement is very important to the community.
Weaknesses
- Relationship to the existing work
I appreciate that the authors summarize the relationship to existing work in Section 4. I understand that the proposed generation procedure is different from the existing ones, but I don't understand why the difference is important. For example, the authors state that "Our STGG framework differs from this line of research since it proposes a new type of graph-based operations for generating the molecular graph", but do not clarify how it is different or why the difference is important. Such a comparison is important for readers to understand the essence of the proposed method.
- No theoretical guarantee to generate valid molecules
I am not convinced by the mechanism for complying with the valence rule, and I wonder whether there is a theoretical guarantee that this mechanism enforces the rule, or whether there is a counter-example where it fails. When constructing a ring, it is desirable that the tail atom has one remaining valence, so that the ring closes by adding a residual edge. Is it possible that the tail atom has no remaining valence and the ring cannot be closed? For example, C*=C-C≡C seems not to be rejected by the mechanism, but we cannot close the ring. (This may be a question rather than a weakness. If there is any misunderstanding, please correct it.)
- Relationship to the classical VAE+BO approaches
As discussed in Section 5.3, one of the major issues in the plogP optimization task is that unrealistic molecules can optimize the score. I consider that there had been an implicit agreement that the optimized molecules should resemble the training data, which led to the classical molecular optimization methods combining a VAE trained on real-world molecules with Bayesian optimization. As far as I am aware, the method by Kajino [Kajino, 19] achieves the best scores among methods using this approach. While the proposed method can control the trade-off between the score and realisticness, it seems the proposed method is not better than VAE+BO approaches in this setting.
[Kajino, 19] Molecular Hypergraph Grammar with Its Application to Molecular Optimization, ICML-19.
Given the discussion below, all of my concerns have been addressed. |
ICLR
Title
Compressive Recovery Defense: A Defense Framework for $\ell_0, \ell_2$ and $\ell_\infty$ norm attacks.
Abstract
We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in Bafna et al. (2018) to defend neural networks against ℓ0, ℓ2, and ℓ∞-norm attacks. In the case of ℓ0-norm noise, we provide recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For ℓ2-norm bounded noise, we provide recovery guarantees for BP, and for the case of ℓ∞-norm bounded noise, we provide recovery guarantees for a modified version of Dantzig Selector (DS). These guarantees theoretically bolster the defense framework introduced in Bafna et al. (2018) for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate the effectiveness of this defense framework against an array of ℓ0, ℓ2 and ℓ∞-norm attacks.
2 INTRODUCTION
Signal measurements are often corrupted by noise. The theory of compressive sensing (Candes et al. (2006)) allows us to retrieve the original signal from a corrupted measurement, under some structural assumptions on the measurement mechanism and the signal. Let us consider the class of machine learning problems where the inputs are compressible (i.e., approximately sparse) in some domain. For instance, images and audio signals are known to be compressible in their frequency domain, and machine learning algorithms have been shown to perform exceedingly well on classification tasks that take such signals as input (Krizhevsky et al. (2012); Sutskever et al. (2014)). However, it was found in Szegedy et al. (2013) that neural networks can be easily forced into making incorrect predictions by adding adversarial perturbations to their inputs; see also Szegedy et al. (2014); Goodfellow et al. (2015); Papernot et al. (2016); Carlini & Wagner (2017). Further, the adversarial perturbations that led to incorrect predictions were shown to be very small (in either ℓ0, ℓ2, or ℓ∞-norm) and often imperceptible to human beings. For this class of machine learning tasks, we show how to approximately recover original inputs from adversarial inputs and thus defend the neural network against ℓ0-norm, ℓ2-norm and ℓ∞-norm attacks.
In the case of ℓ0-norm attacks on neural networks, the adversary can perturb a bounded number of coordinates in the input vector but has no restriction on how much each coordinate is perturbed in absolute value. In the case of ℓ2-norm attacks, the adversary can perturb as many coordinates of the input vector as they choose, as long as the ℓ2-norm of the perturbation vector is bounded. Finally, in ℓ∞-norm attacks, the adversary is only constrained by the amount of noise added to each coordinate of the input vector.
The contribution and structure of this paper are as follows. In Section 3.1, we describe the Compressive Recovery Defense (CRD) framework, a compressive-sensing-based framework for defending neural networks against adversarial inputs. This is essentially the same framework introduced in Bafna et al. (2018), though Bafna et al. (2018) considered only ℓ0 attacks. In Section 3.2, we present the recovery algorithms which are used in the CRD framework to approximately recover original inputs from adversarial inputs. These algorithms include standard Basis Pursuit (BP), (k, t)-sparse Iterative Hard Thresholding (IHT), and Dantzig Selector (DS) with an additional constraint. In Section 3.3, we state recovery guarantees for the recovery algorithms in the presence of noise bounded in either ℓ0, ℓ2, or ℓ∞-norm. The guarantees apply to arbitrary ℓ0, ℓ2, and ℓ∞-norm attacks; they do not require prior knowledge of the adversary's attack strategy. The recovery guarantees are proved rigorously in Appendix A. In Section 4, we experimentally demonstrate the performance of
the CRD framework in defending neural network classifiers on the CIFAR-10, MNIST, and Fashion-MNIST datasets against state-of-the-art ℓ0, ℓ2 and ℓ∞-norm attacks.
Notation. Let x be a vector in C^N. Let S ⊆ {1, . . . , N} and S̄ = {1, . . . , N} \ S. The cardinality of S is |S|. If A ∈ C^{m×N} is a matrix, then A_S ∈ C^{m×|S|} is the column submatrix of A consisting of the columns indexed by S. We denote by x_S either the sub-vector in C^S consisting of the entries indexed by S or the vector in C^N that is formed by starting with x and setting the entries indexed by S̄ to zero. For example, if x = [4, 5, −9, 1]^T and S = {1, 3}, then x_S is either [4, −9]^T or [4, 0, −9, 0]^T. It will always be clear from context which meaning is intended. Note that, under the second meaning, x_S̄ = x − x_S. The support of x, denoted by supp(x), is the set of indices of the non-zero entries of x, i.e., supp(x) = {i ∈ {1, . . . , N} : x_i ≠ 0}. The ℓ0-quasinorm of x, denoted ‖x‖_0, is defined to be the number of non-zero entries of x, i.e., ‖x‖_0 = card(supp(x)). We say that x is k-sparse if ‖x‖_0 ≤ k. We use x_{h(k)} to denote a k-sparse vector in C^N consisting of the k largest (in absolute value) entries of x with all other entries zero. For example, if x = [4, 5, −9, 1]^T then x_{h(2)} = [0, 5, −9, 0]^T. Note that x_{h(k)} may not be uniquely defined. In contexts where a unique meaning for x_{h(k)} is needed, we can choose x_{h(k)} out of all possible candidates according to a predefined rule (such as the lexicographic order). We also define x_{t(k)} = x − x_{h(k)}. If x = [x_1, x_2]^T ∈ C^{2n} with x_1, x_2 ∈ C^n, and if x_1 is k-sparse and x_2 is t-sparse, then x is called (k, t)-sparse. We define x_{h(k,t)} = [(x_1)_{h(k)}, (x_2)_{h(t)}]^T, which is a (k, t)-sparse vector in C^{2n}.
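To make the thresholding operators concrete, the following is a small NumPy sketch of x_{h(k)} and x_{h(k,t)} (an illustration under the definitions above, not code from the paper):

import numpy as np

# x_h(k) keeps the k largest-magnitude entries of x; x_h(k,t) applies h(k) and
# h(t) to the two halves of a vector in C^{2n}. Ties are broken by argsort,
# which is one admissible choice per the remark above.
def h(x, k):
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest entries
    out[idx] = x[idx]
    return out

def h_kt(x, k, t):
    n = x.shape[0] // 2
    return np.concatenate([h(x[:n], k), h(x[n:], t)])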
3 THEORY
3.1 COMPRESSIVE RECOVERY DEFENSE (CRD)
Bafna et al. (2018) introduced a framework for defending machine learning classifiers against ℓ0-attacks. We extend the framework to ℓ2 and ℓ∞ attacks. The defense framework is based on the theory of compressive sensing, so we call it Compressive Recovery Defense (CRD).
We explain the idea behind the CRD framework in the context of an image classifier. Suppose x ∈ C^n is a (flattened) image vector we wish to classify. But suppose an adversary perturbs x with a noise vector e ∈ C^n. We observe y = x + e, while x and e are unknown to us. Let F ∈ C^{n×n} be the Discrete Fourier Transform (DFT) matrix. The Fourier coefficients of x are x̂ = Fx. It is well-known that natural images are approximately sparse in the frequency domain. So we expect that x̂ is approximately sparse, meaning that x̂_{t(k)} is small for some small k. We can write

y = F^{-1}x̂ + e. (1)

If ‖e‖_2 ≤ η or ‖e‖_∞ ≤ η, with η small (as in an ℓ2 or ℓ∞-attack), then we can use an appropriate sparse recovery algorithm with y and F^{-1} as input to compute a good approximation x^# to x̂. Precise error bounds are given in Section 3.3. Then, since F is unitary, F^{-1}x^# will be a good approximation (i.e., reconstruction) of x = F^{-1}x̂. So we can feed F^{-1}x^# into the classifier and expect to get the same classification as we would have for x. For an ℓ0-attack where e is t-sparse, the approach is only slightly different. We set A = [F^{-1}, I] and write
y = F^{-1}x̂ + e = F^{-1}x̂_{h(k)} + e + F^{-1}x̂_{t(k)} = A[x̂_{h(k)}, e]^T + F^{-1}x̂_{t(k)}, (2)
so that [x̂_{h(k)}, e]^T is (k, t)-sparse. This structure lets us use a sparse recovery algorithm to compute a good approximation to x̂, as before. Note that the same idea can be applied to audio signals or other types of data instead of images. Moreover, the DFT can be replaced by any unitary transformation F for which x̂ = Fx is approximately sparse. For example, F may be the Cosine Transform, Sine Transform, Hadamard Transform, or another wavelet transform.
We now describe the training and testing procedure for CRD. For each training image x, we compute x̂_{h(k)} = (Fx)_{h(k)} and then compute the compressed image x′ = F^{-1}x̂_{h(k)}. We then add both x and x′ to the training set and train the network in the usual way. Given a (potentially adversarial) test image y, we first use a sparse recovery algorithm to compute an approximation x^# to x̂, then we compute the reconstructed image y′ = F^{-1}x^# and feed it into the network for classification.
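A minimal sketch of this pipeline, with the orthonormal DCT standing in for the unitary F, h() taken from the notation sketch above, and recover denoting any of the recovery algorithms of Section 3.2, might look as follows (all names are illustrative):

import numpy as np
from scipy.fft import dct, idct

# Compress a flattened training image to its k largest transform coefficients.
def compress(x, k):
    xhat = dct(x, norm="ortho")              # x_hat = F x
    return idct(h(xhat, k), norm="ortho")    # x' = F^{-1} x_hat_h(k)

# Reconstruct a (potentially adversarial) test image before classification.
def crd_reconstruct(y, recover, **kwargs):
    x_sharp = recover(y, **kwargs)           # approximate sparse coefficients
    return idct(x_sharp, norm="ortho")       # y' = F^{-1} x^#, fed to the classifier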
3.2 RECOVERY ALGORITHMS
We provide the recovery algorithms used in this section. For ℓ0-attacks, we set A = [F^{-1}, I] as in (2). Against ℓ2 or ℓ∞-attacks, we take A = F^{-1} as in (1).
Algorithm 1: (k, t)-Sparse Iterative Hard Thresholding (IHT)
Procedure: IHT(y, A, k, t, T)
Input: y ∈ C^n, A ∈ C^{n×2n}, and positive integers k, t, T.
x^{[0]} = 0
for i := 0 to T do
  x^{[i+1]} = (x^{[i]} + A^*(y − Ax^{[i]}))_{h(k,t)}
return x^# = x^{[T+1]}
The IHT algorithm above is used to defend against ℓ0-norm attacks. For such attacks, according to (2), the vector we need to recover is (k, t)-sparse. Thus this IHT is adapted to the structure of our problem, as it uses the thresholding operation h(k, t) that produces (k, t)-sparse vectors. This structured IHT was first considered in Baraniuk et al. (2010). It gives better theoretical guarantees and practical performance in our CRD application than the standard IHT, which would instead use the thresholding operation h(k + t) that produces (k + t)-sparse vectors. For ℓ2 or ℓ∞ attacks, the recovery error for IHT would (in general) be larger due to the need to include a term for the ℓ2 norm of the tail of the noise vector e. This, in turn, produces worse expected performance of the recovery defense. Therefore we only use Algorithm 1 for ℓ0-norm attacks. We note that the results of Theorem 1 allow for values of k and t greater than or equal to those allowed by Theorem 2.2 of Bafna et al. (2018).
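A NumPy sketch of Algorithm 1 for the ℓ0 setting, realizing A = [F^{-1}, I] implicitly through the orthonormal DCT (an illustration that reuses the h_kt helper from the notation sketch above), is:

import numpy as np
from scipy.fft import dct, idct

# With A = [F^{-1}, I] and x = [x_hat, e]: A x = F^{-1} x_hat + e, and the
# adjoint acts as A* r = (F r, r), i.e. (dct(r), r) for the orthonormal DCT.
def iht(y, k, t, T):
    n = y.shape[0]
    x = np.zeros(2 * n)                      # x = [x_hat, e]
    for _ in range(T + 1):
        residual = y - (idct(x[:n], norm="ortho") + x[n:])              # y - A x
        grad = np.concatenate([dct(residual, norm="ortho"), residual])  # A* residual
        x = h_kt(x + grad, k, t)             # (k, t)-sparse thresholding
    return x[:n], x[n:]                      # recovered coefficients and noise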
Algorithm 2: Basis Pursuit (BP)
Procedure: BP(y, A, η)
Input: y ∈ C^m, A ∈ C^{m×N}, and η ≥ 0.
x^# = argmin_{z∈C^N} ‖z‖_1 subject to ‖Az − y‖_2 ≤ η
return x^#
We utilize BP for ℓ0 and ℓ2 norm attacks. In the ℓ0 norm case, BP allows us to provide recovery guarantees for larger values of k and t than IHT. For instance, in the case of MNIST and Fashion-MNIST, IHT (equation (4) of Theorem 1) allows us to set k = 4 and t = 3, whereas BP (equation (7) of Theorem 2) allows us to set k = 8 and t = 8.
In the case of ℓ2 norm attacks, BP is applied with A = F^{-1}, a unitary matrix. As unitary matrices are isometries in ℓ2 norm, BP provides good recovery guarantees for such matrices, and hence against ℓ2 norm attacks.
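Since the experiments in Section 4 implement Algorithm 2 with CVXPY, a minimal sketch in that style might look as follows (our illustration, not the released implementation):

import cvxpy as cp

# Basis Pursuit (Algorithm 2): A is an explicit matrix, e.g. the inverse-DCT
# matrix F^{-1} for ell_2 attacks or [F^{-1}, I] for ell_0 attacks.
def basis_pursuit(y, A, eta):
    z = cp.Variable(A.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm(z, 1)),
                         [cp.norm(A @ z - y, 2) <= eta])
    problem.solve()
    return z.value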
Algorithm 3: Modified Dantzig Selector (DS)
Procedure: DS(y, A, η)
Input: y ∈ C^m, A ∈ C^{m×N}, and η ≥ 0.
x^# = argmin_{z∈C^N} ‖z‖_1 subject to ‖A^*(Az − y)‖_∞ ≤ √n·η, ‖Az − y‖_∞ ≤ η
return x^#
We utilize DS for ℓ∞ norm attacks. The standard Dantzig Selector algorithm does not have the additional constraint ‖Az − y‖_∞ ≤ η. Our modified Dantzig Selector includes this constraint for the following reason. In our application, A = F^{-1} and we want the reconstruction Ax^# = F^{-1}x^# to be close to the original image x, so that they are classified identically. Thus, we want the search space for x^# to be restricted to those z ∈ C^N such that ‖Az − x‖_∞ is small. Note that, for any z ∈ C^N, ‖Az − x‖_∞ ≤ ‖Az − y‖_∞ + ‖x − y‖_∞. In an ℓ∞-attack, ‖x − y‖_∞ = ‖e‖_∞ is already small. Thus it suffices to require that ‖Az − y‖_∞ is small. We experimentally illustrate the improvement in reconstruction due to the additional constraint in Section 4.3 (Figure 4, Table 4).
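Analogously, a CVXPY sketch of Algorithm 3, including the additional ℓ∞ constraint on Az − y, might be:

import cvxpy as cp
import numpy as np

# Modified Dantzig Selector (Algorithm 3) with both ell_inf constraints.
def dantzig_selector(y, A, eta):
    n = A.shape[0]
    z = cp.Variable(A.shape[1])
    residual = A @ z - y
    constraints = [cp.norm(A.T @ residual, "inf") <= np.sqrt(n) * eta,
                   cp.norm(residual, "inf") <= eta]
    problem = cp.Problem(cp.Minimize(cp.norm(z, 1)), constraints)
    problem.solve()
    return z.value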
Remarks on Reverse-Engineered Attacks. As observed in Bafna et al. (2018), x^{[0]} in Algorithm 1 can be initialized randomly to defend against a reverse-engineered attack. In the case of Algorithm 2 and Algorithm 3, the minimization problems can be posed as semi-definite programming problems. If solved with interior point methods, one can randomly initialize the central path parameter and add randomness to the stopping criterion. This makes recovery non-deterministic and consequently makes it non-trivial to create a reverse-engineered attack.
3.3 RECOVERY GUARANTEES
Let F ∈ C^{n×n} be a unitary matrix and I ∈ C^{n×n} be the identity matrix. Define A = [F, I] ∈ C^{n×2n} and let y = A[x̂, e]^T = Fx̂ + e, where x̂, e ∈ C^n. Let 1 ≤ k, t ≤ n be integers.
Theorem 1 (ℓ0-norm IHT). Assume |F_{ij}|^2 ≤ c/n and e is t-sparse. Let x^{[T+1]} = IHT(y, A, k, t, T), where x^{[T+1]} = [x̂^{[T+1]}, e^{[T+1]}]^T ∈ C^{2n} with x̂^{[T+1]}, e^{[T+1]} ∈ C^n. Define ρ := √27·√(ckt/n) and τ(1 − ρ) := √3·√(1 + 2√(ckt/n)). If 0 < ρ < 1, then:

‖x̂^{[T+1]} − x̂_{h(k)}‖_2 ≤ ρ^{T+1}·√(‖x̂_{h(k)}‖_2^2 + ‖e‖_2^2) + τ‖x̂_{t(k)}‖_2 (3)

Moreover, for any 0 < ε < 1 and any T ≥ (log(1/ε) + log(√(‖x̂_{h(k)}‖_2^2 + ‖e‖_2^2))) / log(1/ρ), we get:

‖x̂^{[T+1]} − x̂_{h(k)}‖_2 ≤ τ‖x̂_{t(k)}‖_2 + ε (4)

Now define ρ := 2√2·√(ckt/n) and τ(1 − ρ) := 2. If 0 < ρ < 1, then:

‖x̂^{[T+1]} − x̂_{h(k)}‖_2 ≤ ρ^{T+1}‖x̂_{h(k)}‖_2 + τ(‖x̂_{t(k)}‖_2 + ‖e‖_2) (5)

Moreover, for any 0 < ε < 1 and any T ≥ (log(1/ε) + log(‖x̂_{h(k)}‖_2)) / log(1/ρ), we get:

‖x̂^{[T+1]} − x̂_{h(k)}‖_2 ≤ τ(‖x̂_{t(k)}‖_2 + ‖e‖_2) + ε (6)
Let us explain how to interpret the recovery guarantees provided by Theorem 1. The inequalities (3), (4), (5), (6) provide an upper bound on the size of ‖x̂[T+1] − x̂h(k)‖2. Since F is a unitary matrix, ‖x̂[T+1] − x̂h(k)‖2 equals ‖Fx̂[T+1] − Fx̂h(k)‖2, which is the difference between the reconstructed image Fx̂[T+1] and the compressed image Fx̂h(k) (which is a compressed version of the original image x). So the inequalities of Theorem 1 tell us how close the reconstructed image must be to the compressed image, and thus indicates how confident we should be that the classification of the reconstructed image will agree with the classification of the compressed image. In other words, the inequalities tell us how likely it is that the CRD scheme using IHT will be able to recover the correct class of the original image, and thus defend the classifier from the adversarial attack. The presence of the norm of the tail x̂t(k) in the upper bounds indicates that the CRD scheme should be more effective when the original image is closer to being perfectly k-sparse in the transformed basis. The ratio kt/n in the upper bounds (via ρ and τ ) suggests that smaller values of k and t relative to n (i.e., sparser transformed images x̂ and error vectors e) will lead to the CRD being more effective. The experiments in Section 4 will demonstrate these phenomena.
Let us compare Theorem 1 to the similar Theorem 2.2 of Bafna et al. (2018). We observe that (3) and (4) allow larger values of k and t than Theorem 2.2 of Bafna et al. (2018). This is because the authors of Bafna et al. (2018) prove their results using Theorem 4 of Baraniuk et al. (2010), which is more restrictive for the values of k and t. We do not use Theorem 4 of Baraniuk et al. (2010). Instead we use (a modified form of) Theorem 6.18 of Foucart & Rauhut (2017) to get (3) and (4). Both Theorem 4 of Baraniuk et al. (2010) (used by Bafna et al. (2018)) and Theorem 6.18 of Foucart & Rauhut (2017) (used by us here) take as input the Restricted Isometry Property (RIP) stated in Theorem 7. We and Bafna et al. (2018) both essentially prove the same RIP, although the proof methods are different. We use a standard Gershgorin disc theorem argument to bound eigenvalues, while Bafna et al. (2018) perform a direct estimation using the triangle inequality and AM-GM inequality.
We turn now to (5) and (6), which provide recovery guarantees for larger values of k and t than (3) and (4), at the expense of the extra error term ‖e‖2. Our proof of (5) and (6) is novel. It relies on explicitly expanding one iteration of IHT in matrix form and using the structure of the resulting matrix form to bound the approximation error at iteration T in terms of the error at iteration T − 2. We then use an inductive argument as in Theorem 6.18 of Foucart & Rauhut (2017) to get (5) and (6).
Next, we consider the recovery error for ℓ0-norm bounded noise with BP instead of IHT. We note that since Algorithm 2 is not adapted to the (k, t)-sparse structure of the vector to be recovered, we do not expect the guarantees to be particularly strong. However, providing bounds for BP is useful, as there are cases where BP provides recovery guarantees when recovering a larger number of coefficients (k) and with a larger ℓ0 noise budget (t) than IHT.
Theorem 2 (ℓ0-norm BP). Assume |F_{ij}|^2 ≤ c/n. Define

δ_{k,t} = √(ckt/n), β = √(max{k, t}·c/n), θ = (√(k + t) / (1 − δ_{k,t}))·β, τ = √(1 + δ_{k,t}) / (1 − δ_{k,t}).

If 0 < δ_{k,t} < 1 and 0 < θ < 1, then for x^# = BP(y, A, ‖x̂_{t(k)}‖_2), we have the error bound

‖x̂^# − x̂_{h(k)}‖_2 ≤ ( (2τ√(k + t) / (1 − θ))·(1 + β / (1 − δ_{k,t})) + 2τ )·‖x̂_{t(k)}‖_2 (7)

where we write x^# = [x̂^#, e^#]^T ∈ C^{2n} with x̂^#, e^# ∈ C^n.
Note that the recovery error in (7) is O(√(k + t)·‖x̂_{t(k)}‖_2), which means that we should not expect recovery to be close when the attacker has a large ℓ0 noise budget or when x̂ is not sparse. Also observe that the recovered vector x̂^# is not necessarily k-sparse. The recovery error still captures the difference between the original image Fx̂ and the reconstructed image Fx̂^#, where a smaller recovery error should once again indicate that our classifier would make the correct prediction. Our third result covers the case when the noise is bounded in ℓ2-norm.

Theorem 3 (ℓ2-norm BP). If ‖e‖_2 ≤ η, then for x^# = BP(y, F, η), we have the error bounds

‖x^# − x̂‖_1 ≤ 2(‖x̂_{t(k)}‖_1 + 2√k·η) (8)

‖x^# − x̂‖_2 ≤ (2/√k)‖x̂_{t(k)}‖_1 + 6η (9)
Finally, we provide recovery guarantees when the noise is bounded in ℓ∞-norm.

Theorem 4 (ℓ∞-norm DS). If ‖e‖_∞ ≤ η, then for x^# = DS(y, F, η), we have the error bounds

‖x^# − x̂‖_1 ≤ 2(‖x̂_{t(k)}‖_1 + 2k√n·η) (10)

‖x^# − x̂‖_2 ≤ (2/√k)‖x̂_{t(k)}‖_1 + 6√(kn)·η (11)
The proofs of Theorem 3 and Theorem 4 are based on standard arguments in compressive sensing that rely on establishing the so-called robust null space property of the matrix. Note that the results of Theorem 3 and Theorem 4 also bound the norm difference between the original image Fx̂ and the reconstructed image Fx̂^#, where x̂^# has no sparsity guarantees. Next, observe that the results of Theorem 4 incur a factor of √n in the error bounds due to the constraint ‖A^*(Az − y)‖_∞ ≤ √n·η in Algorithm 3, which is required to prove the robust null space property. Finally, we note that the additional constraint added to Algorithm 3 does not affect the proof of Theorem 4.
3.4 RELATED WORK
The authors of Bafna et al. (2018) introduced the CRD framework which inspired this work. In fact, Theorem 2.2 of Bafna et al. (2018) also provides an approximation error bound for recovery via IHT. Note that a hypothesis t = O(n/k) has accidentally been dropped from their Theorem 2.2, though it appears in their Lemma 3.6. By making the implied constants explicit in the argument of Bafna et al. (2018), one sees that their Theorem 2.2 is essentially the same as (3) and (4) in Theorem 1 above. For more details, see the proof of Theorem 1 in Appendix A. Note that our recovery error bounds for IHT in (5) and (6) of Theorem 1 do not have analogs in Bafna et al. (2018). They hold for larger values of k and t at the expense of the additional error term ‖e‖2. Other works that provide guarantees include (Hein & Andriushchenko (2017)) and (Cisse et al. (2017)) where the authors frame the problem as one of regularizing the Lipschitz constant of a network and give a lower bound on the norm of the perturbation required to change the classifier decision. The authors of Sinha et al. (2017) use robust optimization to perturb the training data and provide a training procedure that updates parameters based on worst case perturbations. A similar approach to (Sinha et al. (2017)) is (Wong & Kolter (2017)) in which the authors use robust optimization to provide lower bounds on the norm of adversarial perturbations on the training data.
In Lecuyer et al. (2018), the authors use techniques from Differential Privacy (Dwork et al. (2014)) in order to augment the training procedure of the classifier to improve robustness to adversarial inputs. Another approach using randomization is Li et al. (2018), in which the authors add i.i.d. Gaussian noise to the input and provide guarantees of maintaining classifier predictions as long as the ℓ2-norm of the attack vector is bounded by a function that depends on the output of the classifier.
Most defenses against adversarial inputs do not come with theoretical guarantees. Instead, a large body of research has focused on finding practical ways to improve robustness to adversarial inputs by either augmenting the training data (Goodfellow et al. (2015)), using adversarial inputs from various networks (Tramèr et al. (2017)), or by reducing the dimensionality of the input (Xu et al. (2017)). For instance, Madry et al. (2017) use robust optimization to make the network robust to worst case adversarial perturbations on the training data. However, the effectiveness of their approach is determined by the amount and quality of training data available and its similarity to the distribution of the test data. An approach similar to ours but without any theoretical guarantees is (Samangouei et al. (2018)). In this work, the authors use Generative Adversarial Networks (GANs) to estimate the distribution of the training data and during inference, use a GAN to reconstruct a non-adversarial input that is most similar to a given test input. We now provide a brief overview on the field of compressive sensing.
Though some component ideas originated earlier in other fields, the field of compressive sensing was initiated with the work of Candès et al. (2006) and Donoho et al. (2006), in which the authors studied the problem of reconstructing sparse signals using only a small number of measurements with the choice of a random matrix. The reconstruction was performed using ℓ1-minimization (i.e., Basis Pursuit), which was shown to produce sparse solutions even in the presence of noise; see also Donoho & Elad (2003; 2006); Donoho & Huo (2001). Some of the earlier work in extending compressive sensing to perform stable recovery with deterministic matrices was done by Candes & Tao (2005) and Candes et al. (2006), where the authors showed that recovery of sparse vectors could be performed as long as the measurement matrix satisfied a restricted isometry hypothesis. Blumensath & Davies (2009) introduced IHT as an algorithm to recover sparse signals, which was later modified in Baraniuk et al. (2010) to reduce the search space when the sparsity is structured. The standard DS algorithm was introduced in Candes et al. (2007) in order to perform stable recovery in the presence of ℓ∞ noise.
4 EXPERIMENTS
All of our experiments are conducted on the CIFAR-10 (Krizhevsky (2009)), MNIST (LeCun), and Fashion-MNIST (Xiao et al. (2017)) datasets, with pixel values of each image normalized to lie in [0, 1]. Each experiment is conducted on a set of 1000 points sampled uniformly at random from the test set of the respective dataset. For every experiment, we use the Discrete Cosine Transform (DCT) and the Inverse Discrete Cosine Transform (IDCT), denoted by the matrices F ∈ R^{n×n} and F^T ∈ R^{n×n} respectively. That is, for an adversarial image y ∈ R^{√n×√n} such that y = x + e, we let x̂ = Fx and x = F^T x̂, where x, x̂ ∈ R^n and e ∈ R^n is the noise vector. For an adversarial image y ∈ R^{√n×√n×c} that contains c channels, we perform recovery on each channel independently by considering y_m = x_m + e_m, where x̂_m = Fx_m and x_m = F^T x̂_m for m = 1, . . . , c. The value k denotes the number of largest (in absolute value) DCT coefficients used for reconstruction of each channel, and the value t denotes the ℓ0 noise budget for each channel. We implement Algorithm 2 and Algorithm 3 using the open source library CVXPY (Diamond & Boyd (2016)).
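A minimal sketch of this per-channel pipeline (an illustration; recover is any recovery routine returning approximate DCT coefficients) is:

import numpy as np
from scipy.fft import idct

# Each channel of an adversarial image y is flattened, recovered independently,
# and reassembled into an image before classification.
def reconstruct_image(y, recover, **kwargs):
    channels = []
    for m in range(y.shape[-1]):
        ym = y[..., m].ravel()                   # flatten channel m
        x_sharp = recover(ym, **kwargs)          # approximate x_hat_m
        channels.append(idct(x_sharp, norm="ortho").reshape(y.shape[:2]))
    return np.stack(channels, axis=-1)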
We now outline the neural network architectures used for the experiments in Sections 4.1 and 4.2. For CIFAR-10, we use the network architecture of He et al. (2016), while the network architecture for the MNIST and Fashion-MNIST datasets is provided in Table 5 of the Appendix. We train our networks using the Adam optimizer for CIFAR-10 and the AdaDelta optimizer for MNIST and Fashion-MNIST. In both cases, we use a cross-entropy loss function. We train each neural network according to the CRD framework stated in Section 3.1. The code to reproduce our experiments is available here: https://github.com/anonymousiclrcompressive/iclr2020.
[Table 1] Classifier accuracy on CIFAR-10: 77.4% on original images, 0.0% on OPA adversarial images, and 71.8% on images reconstructed with Algorithm 1.
4.1 DEFENSE AGAINST ℓ0-NORM ATTACKS
This section is organized as follows: first we examine CRD against the One Pixel Attack (OPA) (Su et al. (2019)) on CIFAR-10. We only test this attack on CIFAR-10, as it is most effective against natural images and does not work well on MNIST or Fashion-MNIST. We note that this attack satisfies the theoretical constraints on t provided in Theorem 1, hence allowing us to test how well CRD works within the existing guarantees. Once we establish the effectiveness of CRD against OPA, we then test it against two other ℓ0-norm bounded attacks: the Carlini and Wagner (CW) ℓ0-norm attack (Carlini & Wagner (2017)) and the Jacobian-based Saliency Map Attack (JSMA) (Papernot et al. (2016)).
4.1.1 ONE PIXEL ATTACK
We first resize all CIFAR-10 images to 125 × 125 × 3 while maintaining aspect ratios, to ensure that the data falls under the hypotheses of Theorem 1 even for large values of k. The OPA attack perturbs exactly one pixel of the image, leading to an ℓ0 noise budget of t = 3 per image, which in turn allows us to use k = 275 per channel. Table 1 shows that OPA is very effective against natural images and forces the network to mis-classify all previously correctly classified inputs.
We test the performance of CRD in two ways: (a) reconstruction quality and (b) network performance on reconstructed images. In order to analyze the reconstruction quality of Algorithm 1, we do the following: for each test image, we use OPA to perturb the image and then use Algorithm 1 to approximate its largest (in absolute value) k = 275 DCT coefficients. We then perform the IDCT on these recovered coefficients to generate reconstructed images. We illustrate reconstruction on a randomly selected image from the test set in Figure 1.
Noting that Algorithm 1 leads to high quality reconstruction, we now test whether network accuracy improves on these reconstructed images. To do so, we feed these reconstructed images as input to the network and report its accuracy in Table 1. We note that network performance does indeed improve, as network accuracy goes from 0.0% to 71.8% using Algorithm 1. Therefore, we conclude that CRD provides a substantial improvement in accuracy against OPA.
4.1.2 CW-ℓ0 ATTACK AND JSMA
Having established the effectiveness of CRD against OPA, we move on to the CW ℓ0-norm attack and JSMA. We note that even when t is much larger than the hypotheses of Theorem 1 and Theorem 2 allow, Algorithms 1 and 2 are still able to defend the network. We hypothesize that this may be related to the behavior of the RIP of a matrix for "most" vectors as opposed to the RIP for all vectors, and leave a more rigorous analysis for follow-up work.
We follow the procedure described in Section 4.1.1 to analyze the quality of reconstructions for Algorithm 1 and Algorithm 2 in Figure 2. In each case, it can be seen that both algorithms provide high quality reconstructions for values of t that are well outside the hypotheses required by Theorem 1 and Theorem 2. We report these t values and the improvement in network performance on reconstructed adversarial images using CRD in Table 2.
4.2 DEFENSE AGAINST ℓ2-NORM ATTACKS
In the case of ℓ2-norm bounded attacks, we use the CW ℓ2-norm attack (Carlini & Wagner (2017)) and the DeepFool attack (Moosavi-Dezfooli et al. (2016)), as they have been shown to be the most powerful. We note that Theorem 3 does not impose any restrictions on k or t, and therefore the guarantees of equations (8) and (9) are applicable for recovery in all experiments of this section.
The reconstruction quality is shown in Figure 3. It can be noted that reconstruction using Algorithm 2 is of high quality for all three datasets. In order to check whether this high quality reconstruction also leads to improved performance in network accuracy, we test each network on reconstructed images using Algorithm 2. We report the results in Table 3 and note that Algorithm 2 provides a substantial improvement in network accuracy for each dataset and each attack method used.
4.3 DEFENSE AGAINST ℓ∞-NORM ATTACKS
For ℓ∞-norm bounded attacks, we use the BIM attack (Kurakin et al. (2016)), as it has been shown to be very effective and also allows us to control the ℓ∞-norm of the attack vector explicitly. We note that while the CW ℓ∞-norm attack (Carlini & Wagner (2017)) has the ability to create attack vectors with ℓ∞-norm less than or equal to that of BIM, it is computationally expensive and also does not allow one to pre-specify a value for the ℓ∞-norm of an attack vector. Therefore, we limit our experimental analysis to the BIM attack. Note that for any attack vector e, ‖e‖_2 ≤ √n‖e‖_∞, hence allowing ℓ∞-norm attacks to create attack vectors with large ℓ2-norm. Therefore, we could expect reconstruction quality and network accuracy to be lower when compared to ℓ2-norm attacks.
In Figure 4, we compare the reconstruction quality of images reconstructed with Algorithm 3 to those reconstructed using DS without the additional constraint. As can be noted from the figure, DS without the additional constraint may not produce meaningful images. This is also reflected in Table 4, which shows that the accuracy of the network is roughly random on images reconstructed without the additional constraint.
We show examples of original images, adversarial images, and their reconstructions using Algorithm 3 in Figure 5. Finally, we report the network performance on reconstructed inputs using Algorithm 3 in Table 4 and also compare this to the performance on inputs reconstructed using DS without the additional constraint. We note that Algorithm 3 provides an increase in network performance against reconstructed adversarial inputs. However, the improvement in performance is not as substantial as it was against `0 or `2-norm attacks.
5 CONCLUSION
We provided recovery guarantees for corrupted signals in the case of ℓ0-norm, ℓ2-norm, and ℓ∞-norm bounded noise. We were able to utilize these results in CRD and improve the performance of neural networks substantially in the case of ℓ0-norm, ℓ2-norm and ℓ∞-norm bounded noise. While ℓ0-norm attacks don't always satisfy the constraints required by Theorem 1 and Theorem 2, we showed that CRD is still able to provide a good defense for values of t much larger than allowed in the guarantees. The guarantees of Theorem 3 and Theorem 4 were applicable in all experiments, and CRD was shown to improve network performance for all attacks.
A APPENDIX
A.1 RESTRICTED ISOMETRY PROPERTY
We first establish the restricted isometry property for certain structured matrices. First, we give some definitions.
Definition 5. Let A be a matrix in C^{m×N}, let M ⊆ C^N, and let δ ≥ 0. We say that A satisfies the M-restricted isometry property (or M-RIP) with constant δ if

(1 − δ)‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ)‖x‖_2^2 for all x ∈ M.

Definition 6. We define M_k to be the set of all k-sparse vectors in C^N and similarly define M_{k,t} to be the set of (k, t)-sparse vectors in C^{2n}. In other words, M_{k,t} is the following subset of C^{2n}:

M_{k,t} = { x = [x_1, x_2]^T ∈ C^{2n} : x_1 ∈ C^n, x_2 ∈ C^n, ‖x_1‖_0 ≤ k, ‖x_2‖_0 ≤ t }
We define S_{k,t} to be the following collection of subsets of {1, . . . , 2n}:

S_{k,t} = { S_1 ∪ S_2 : S_1 ⊆ {1, . . . , n}, S_2 ⊆ {n + 1, . . . , 2n}, card(S_1) ≤ k, card(S_2) ≤ t }

Note that S_{k,t} is the collection of supports of vectors in M_{k,t}.
Theorem 7. Let A = [F, I] ∈ C^{n×2n}, where F ∈ C^{n×n} is a unitary matrix with |F_{ij}|^2 ≤ c/n and I ∈ C^{n×n} is the identity matrix. Then

(1 − √(ckt/n))‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + √(ckt/n))‖x‖_2^2 (12)

for all x ∈ M_{k,t}. In other words, A satisfies the M_{k,t}-RIP property with constant √(ckt/n).
Proof. In this proof, if B denotes a matrix in C^{n×n}, then λ_1(B), . . . , λ_n(B) denote the eigenvalues of B ordered so that |λ_1(B)| ≤ · · · ≤ |λ_n(B)|. It suffices to fix an S = S_1 ∪ S_2 ∈ S_{k,t} and prove (12) for all non-zero x ∈ C^S. Since A^*_S A_S is normal, there is an orthonormal basis of eigenvectors u_1, . . . , u_{k+t} for A^*_S A_S, where u_i corresponds to the eigenvalue λ_i(A^*_S A_S). For any non-zero x ∈ C^S, we have x = Σ_{i=1}^{k+t} c_i u_i for some c_i ∈ C, so

‖Ax‖_2^2 / ‖x‖_2^2 = ⟨A^*_S A_S x, x⟩ / ⟨x, x⟩ = (Σ_{i=1}^{k+t} λ_i(A^*_S A_S) c_i^2) / (Σ_{i=1}^{k+t} c_i^2). (13)

Thus it will suffice to prove that |λ_i(A^*_S A_S) − 1| ≤ √(ckt/n) for all i. Moreover,

|λ_i(A^*_S A_S) − 1| = |λ_i(A^*_S A_S − I)| = √(λ_i((A^*_S A_S − I)^*(A^*_S A_S − I))) (14)

where the last equality holds because A^*_S A_S − I is normal. By combining (13) and (14), we see that (12) will hold upon showing that the eigenvalues of (A^*_S A_S − I)^*(A^*_S A_S − I) are bounded by ckt/n.

So far we have not used the structure of A, but now we must. Observe that (A^*_S A_S − I)^*(A^*_S A_S − I) is a block diagonal matrix with two diagonal blocks of the form X^*X and XX^*. Therefore the three matrices (A^*_S A_S − I)^*(A^*_S A_S − I), X^*X, and XX^* have the same non-zero eigenvalues. Moreover, X is simply the matrix F_{S_1} with those rows not indexed by S_2 deleted. The hypotheses on F imply that the entries of X^*X satisfy |(X^*X)_{ij}| ≤ ct/n. So the Gershgorin disc theorem implies that each eigenvalue λ of X^*X and (hence) of (A^*_S A_S − I)^*(A^*_S A_S − I) satisfies |λ| ≤ ckt/n.
A.2 ITERATIVE HARD THRESHOLDING
First we present Theorem 8 and then use it to prove Theorem 1.
Theorem 8. Let A ∈ C^{n×2n} be a matrix. Let 1 ≤ k, t ≤ n be positive integers, suppose δ_3 is an M_{3k,3t}-RIP constant for A, and suppose δ_2 is an M_{2k,2t}-RIP constant for A. Let x ∈ C^{2n}, r ∈ C^n, y = Ax + r, and S ∈ S_{k,t}. Letting x^{[T+1]} = IHT(y, A, k, t, T), if δ_3 < 1/√3, then we have the approximation error bound

‖x^{[T+1]} − x_S‖_2 ≤ ρ^{T+1}‖x^{[0]} − x_S‖_2 + τ‖Ax_S̄ + r‖_2

where ρ := √3·δ_3 < 1 and (1 − ρ)τ = √3·√(1 + δ_2) ≤ 2.18. Thus, the first term on the right goes to 0 as T goes to ∞.
Theorem 8 is a modification of Theorem 6.18 of Foucart & Rauhut (2017). More specifically, Theorem 6.18 of Foucart & Rauhut (2017) considers M_{3k}, M_{2k}, and S_k in place of M_{3k,3t}, M_{2k,2t}, and S_{k,t}, and any dimension N in place of 2n. The proofs are very similar, so we omit the proof of Theorem 8. We will now prove a lemma that will be required for the proof of Theorem 1. For the proof of Lemma 9 and Theorem 1, we use the following convention: if A ∈ C^{m×N} is a matrix, we denote by (A)_S the m×N matrix that is obtained by starting with A and zeroing out the columns indexed by S̄. Note that (A)_S̄ = A − (A)_S.

Lemma 9. Let F ∈ C^{n×n} be a unitary matrix with |F_{ij}|^2 ≤ c/n and let S ⊆ [n] be an index set with |S| = t. Then for any k-sparse vector z ∈ C^n, we have:

‖(F^*)_S F z‖_2^2 ≤ (ktc/n)‖z‖_2^2
Proof of Lemma 9. First note that (F^*)_S ∈ C^{n×n} contains only t non-zero columns since |S| = t. Therefore, we have |((F^*)_S F)_{ij}| ≤ tc/n since |F_{ij}|^2 ≤ c/n. Further, since the non-zero columns of (F^*)_S are orthogonal to each other, we get ((F^*)_S)^*(F^*)_S = (I)_S, where I ∈ C^{n×n} is the identity matrix. Using this, we have for any w ∈ C^n,

‖(F^*)_S F w‖_2^2 = ⟨(F^*)_S F w, (F^*)_S F w⟩ = ⟨((F^*)_S F)^*(F^*)_S F w, w⟩ = ⟨(F^*)_S F w, w⟩ = |⟨(F^*)_S F w, w⟩|

Now let V ⊆ [n] be any index set with cardinality k, that is, |V| = k, and let z ∈ C^n be any vector supported on V. We then get

‖(F^*)_S F z‖_2^2 = |⟨(F^*)_S F z, z⟩| = |Σ_{k∈V} z^*_k Σ_{j∈V} ((F^*)_S F)_{kj} z_j| ≤ Σ_{k∈V} |z^*_k| Σ_{j∈V} |((F^*)_S F)_{kj}| |z_j| ≤ (tc/n)‖z‖_1^2 ≤ (ktc/n)‖z‖_2^2

where we use the fact that z is k-sparse for the last inequality.
Now we provide the proof for Theorem 1.
Proof of Theorem 1. Theorem 7 implies that the statement of Theorem 8 holds with δ_3 = √(c·3k·3t/n) and δ_2 = √(c·2k·2t/n). Noting that y = A[x̂_{h(k)}, e]^T + Fx̂_{t(k)}, where [x̂_{h(k)}, e]^T ∈ M_{k,t}, set x^{[T+1]} = IHT(y, A, k, t, T) and apply Theorem 8 with x = [x̂_{h(k)}, e]^T, r = Fx̂_{t(k)}, and S = supp(x). Letting x^{[T+1]} = [x̂^{[T+1]}, e^{[T+1]}]^T, use the facts that ‖x̂^{[T+1]} − x̂_{h(k)}‖_2 ≤ ‖x^{[T+1]} − x_S‖_2 and ‖Fx̂_{t(k)}‖_2 = ‖x̂_{t(k)}‖_2. That gives (3). Letting T = (log(1/ε) + log(√(‖x̂_{h(k)}‖_2^2 + ‖e‖_2^2))) / log(1/ρ) gives ρ^T·√(‖x̂_{h(k)}‖_2^2 + ‖e‖_2^2) ≤ ε, which can be substituted into (3) to get (4). Noting that ‖e^{[T]} − e‖_2 ≤ τ‖x̂_{t(k)}‖_2 + ε, we can use the same reasoning as in Bafna et al. (2018) to get:

‖x̂^{[T+1]} − x̂_{h(k)}‖_∞ ≤ √(2ct/n)·(τ‖x̂_{t(k)}‖_2 + ε) (15)

‖x̂^{[T+1]} − x̂_{h(k)}‖_2 ≤ √(4ckt/n)·(τ‖x̂_{t(k)}‖_2 + ε) (16)

which are essentially the same as the results of Theorem 2.2 in Bafna et al. (2018).
Now we prove (5). Write x^{[T]} = (z^{[T]})_{h(k,t)}, where z^{[T]} = x^{[T−1]} + A^*(y − Ax^{[T−1]}). Further, write z^{[T]} = [z_1^{[T]}, z_2^{[T]}]^T ∈ C^{2n}, where z_1^{[T]}, z_2^{[T]} ∈ C^n. Note that x̂^{[T]} = (z_1^{[T]})_{h(k)}. Therefore, we have z_1^{[T]} = F^*(y − e^{[T−1]}), where e^{[T−1]} = (y − Fx̂^{[T−2]})_{h(t)}. Now let S be the set of indices selected by the hard thresholding operation h(t) to get e^{[T−1]}. Then observe that z_1^{[T]} = F^*(y − (y − Fx̂^{[T−2]})_S). Next, note that ‖z_1^{[T]} − x̂^{[T]}‖_2^2 ≤ ‖z_1^{[T]} − x̂_{h(k)}‖_2^2, as x̂^{[T]} is a best k-sparse approximation to z_1^{[T]}. We can thus write

‖(z_1^{[T]} − x̂_{h(k)}) − (x̂^{[T]} − x̂_{h(k)})‖_2^2 = ‖z_1^{[T]} − x̂_{h(k)}‖_2^2 − 2Re⟨z_1^{[T]} − x̂_{h(k)}, x̂^{[T]} − x̂_{h(k)}⟩ + ‖x̂^{[T]} − x̂_{h(k)}‖_2^2

Therefore, we have

‖x̂^{[T]} − x̂_{h(k)}‖_2^2 ≤ 2Re⟨z_1^{[T]} − x̂_{h(k)}, x̂^{[T]} − x̂_{h(k)}⟩ ≤ 2|⟨z_1^{[T]} − x̂_{h(k)}, x̂^{[T]} − x̂_{h(k)}⟩| ≤ 2‖z_1^{[T]} − x̂_{h(k)}‖_2 ‖x̂^{[T]} − x̂_{h(k)}‖_2

If ‖x̂^{[T]} − x̂_{h(k)}‖_2 > 0, then ‖x̂^{[T]} − x̂_{h(k)}‖_2 ≤ 2‖z_1^{[T]} − x̂_{h(k)}‖_2. Now note that

z_1^{[T]} = x̂ + F^*e − F^*(F(x̂ − x̂^{[T−2]}) + e)_S = x̂ + F^*e − (F^*)_S(F(x̂ − x̂^{[T−2]}) + e) = x̂ + (F^* − (F^*)_S)e − (F^*)_S F(x̂ − x̂^{[T−2]})

Using the fact that (F^*)_S̄ = F^* − (F^*)_S, we can simplify the above to get:

‖z_1^{[T]} − x̂_{h(k)}‖_2 = ‖(F^*)_S̄ F x̂_{t(k)} + (F^*)_S̄ e − (F^*)_S F(x̂_{h(k)} − x̂^{[T−2]})‖_2

Therefore,

‖x̂^{[T]} − x̂_{h(k)}‖_2 ≤ 2(‖(F^*)_S̄ F‖_{2→2}‖x̂_{t(k)}‖_2 + ‖(F^*)_S̄‖_{2→2}‖e‖_2 + ‖(F^*)_S F(x̂_{h(k)} − x̂^{[T−2]})‖_2) ≤ 2(‖x̂_{t(k)}‖_2 + ‖e‖_2) + 2‖(F^*)_S F(x̂_{h(k)} − x̂^{[T−2]})‖_2

where we use ‖(F^*)_S̄‖_{2→2} ≤ ‖F^*‖_{2→2} = 1. Now since x̂_{h(k)} − x̂^{[T−2]} is 2k-sparse, we can use the result of Lemma 9 to get:

‖x̂^{[T]} − x̂_{h(k)}‖_2 ≤ 2(‖x̂_{t(k)}‖_2 + ‖e‖_2) + 2√(2ktc/n)·‖x̂^{[T−2]} − x̂_{h(k)}‖_2

Now let ρ = 2√2·√(ktc/n) and τ(1 − ρ) = 2, and note that if ρ < 1, we can use induction on T to get (5). Then for any 0 < ε < 1 and any T ≥ (log(1/ε) + log(‖x̂_{h(k)}‖_2)) / log(1/ρ), we have ρ^T‖x̂_{h(k)}‖_2 ≤ ε, which gives us (6).
A.3 BASIS PURSUIT
Definition 10. The matrix A ∈ C^{m×N} satisfies the robust null space property with constants 0 < ρ < 1, τ > 0 and norm ‖·‖ if for every set S ⊆ [N] with card(S) ≤ s and for every v ∈ C^N we have

‖v_S‖_1 ≤ ρ‖v_S̄‖_1 + τ‖Av‖

Definition 11. The matrix A ∈ C^{m×N} satisfies the ℓ_q robust null space property of order s with constants 0 < ρ < 1, τ > 0 and norm ‖·‖ if for every set S ⊆ [N] with card(S) ≤ s and for every v ∈ C^N we have

‖v_S‖_q ≤ (ρ / s^{1−1/q})‖v_S̄‖_1 + τ‖Av‖

Note that if q = 1 then this is simply the robust null space property.
The proof of Theorem 2 requires the following theorem (whose full proof is given in Foucart & Rauhut (2017)).

Theorem 12 (Theorem 4.33 in Foucart & Rauhut (2017)). Let a_1, . . . , a_N be the columns of A ∈ C^{m×N}, let x ∈ C^N with its s largest absolute entries supported on S, and let y = Ax + e with ‖e‖_2 ≤ η. For δ, β, γ, θ, τ ≥ 0 with δ < 1, assume that

‖A^*_S A_S − I‖_{2→2} ≤ δ, max_{l∈S̄} ‖A^*_S a_l‖_2 ≤ β,

and that there exists a vector u = A^*h ∈ C^N with h ∈ C^m such that

‖u_S − sgn(x_S)‖_2 ≤ γ, ‖u_S̄‖_∞ ≤ θ, and ‖h‖_2 ≤ τ√s.

If ρ := θ + βγ/(1 − δ) < 1, then a minimizer x^# of ‖z‖_1 subject to ‖Az − y‖_2 ≤ η satisfies

‖x^# − x‖_2 ≤ (2/(1 − ρ))(1 + β/(1 − δ))‖x_S̄‖_1 + ((2(µγ + τ√s)/(1 − ρ))(1 + β/(1 − δ)) + 2µ)η

where µ := √((1 + δ)/(1 − δ)) and sgn(x)_i = 0 if x_i = 0, sgn(x)_i = 1 if x_i > 0, and sgn(x)_i = −1 if x_i < 0.
Lemma 13. Let A ∈ C^{n×2n}. If (1 − δ)‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ)‖x‖_2^2 for all x ∈ M_{k,t}, then ‖A^*_S A_S − I‖_{2→2} ≤ δ for any S ∈ S_{k,t}.

Proof. Let S ∈ S_{k,t} be given. Then for any x ∈ C^S, we have

|‖A_S x‖_2^2 − ‖x‖_2^2| ≤ δ‖x‖_2^2

We can re-write the left-hand side as |‖A_S x‖_2^2 − ‖x‖_2^2| = |⟨A_S x, A_S x⟩ − ⟨x, x⟩| = |⟨(A^*_S A_S − I)x, x⟩|. Noting that A^*_S A_S − I is Hermitian, we have:

‖A^*_S A_S − I‖_{2→2} = max_{x∈C^S\{0}} |⟨(A^*_S A_S − I)x, x⟩| / ‖x‖_2^2 ≤ δ
Proof of Theorem 2. We will derive (7) by showing that the matrix $A$ satisfies all the hypotheses in Theorem 12 for every vector in $M_{k,t}$.

First note that by Theorem 7, $A$ satisfies the $M_{k,t}$-RIP property with constant $\delta_{k,t} := \sqrt{\frac{ckt}{n}}$. Therefore, by Lemma 13, for any $S \in S_{k,t}$, we have $\|A_S^* A_S - I\|_{2\to 2} \le \delta_{k,t}$. Since $A_S^* A_S$ is a positive semi-definite matrix, it has only non-negative eigenvalues, which lie in the range $[1 - \delta_{k,t},\, 1 + \delta_{k,t}]$. Since $\delta_{k,t} < 1$ by assumption, $A_S^* A_S$ is invertible. Thus, we can set $h = A_S(A_S^* A_S)^{-1}\mathrm{sgn}(x_S)$ and get:
$$\|h\|_2 = \|A_S(A_S^* A_S)^{-1}\mathrm{sgn}(x_S)\|_2 \le \|A_S\|_{2\to 2}\,\|(A_S^* A_S)^{-1}\|_{2\to 2}\,\|\mathrm{sgn}(x_S)\|_2 \le \tau\sqrt{k+t}$$
where $\tau = \sqrt{\frac{1+\delta_{k,t}}{1-\delta_{k,t}}}$ and we have used the following facts: since $\|A_S^* A_S - I\|_{2\to 2} \le \delta_{k,t} < 1$, we get that $\|(A_S^* A_S)^{-1}\|_{2\to 2} \le \frac{1}{1-\delta_{k,t}}$, and the largest singular value of $A_S$ is at most $\sqrt{1+\delta_{k,t}}$. Now let $u = A^*h$; then $\|u_S - \mathrm{sgn}(x_S)\|_2 = 0$, so we may take $\gamma = 0$. Next we need to bound the value $\|u_{\bar{S}}\|_\infty$. Denoting row $j$ of $A_{\bar{S}}^* A_S$ by the vector $v_j$, we see that it has at most $\max\{k, t\}$ non-zero entries and that $|(v_j)_l|^2 \le \frac{c}{n}$ for $l = 1, \dots, k+t$. Therefore, for any element $(u_{\bar{S}})_j$, we have:
$$|(u_{\bar{S}})_j| = |\langle (A_S^* A_S)^{-1}\mathrm{sgn}(x_S), (v_j)^*\rangle| \le \|(A_S^* A_S)^{-1}\|_{2\to 2}\,\|\mathrm{sgn}(x_S)\|_2\,\|v_j\|_2 \le \frac{\sqrt{k+t}}{1-\delta_{k,t}}\sqrt{\frac{\max\{k,t\}c}{n}}$$
Defining $\beta := \sqrt{\frac{\max\{k,t\}c}{n}}$ and $\theta := \frac{\sqrt{k+t}}{1-\delta_{k,t}}\beta$, we get $\|u_{\bar{S}}\|_\infty \le \theta < 1$ and also observe that $\max_{l \in \bar{S}}\|A_S^* a_l\|_2 \le \beta$. Therefore, all the hypotheses of Theorem 12 have been satisfied (with $\gamma = 0$, so that $\rho = \theta$). Note that $y = F\hat{x} + e = A[\hat{x}_{h(k)}\, e]^T + F\hat{x}_{t(k)}$. Therefore, setting $x^{\#} = \mathrm{BP}(y, A, \|\hat{x}_{t(k)}\|_2)$, we use the fact $\|F\hat{x}_{t(k)}\|_2 = \|\hat{x}_{t(k)}\|_2$ combined with the bound in Theorem 12 to get (7):
$$\|\hat{x}^{\#} - \hat{x}_{h(k)}\|_2 \le \left(\frac{2\tau\sqrt{k+t}}{1-\theta}\left(1 + \frac{\beta}{1-\delta_{k,t}}\right) + 2\tau\right)\|\hat{x}_{t(k)}\|_2$$
where we write $x^{\#} = [\hat{x}^{\#}, e^{\#}]^T$ with $\hat{x}^{\#}, e^{\#} \in \mathbb{C}^n$.
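The paper reports solving Algorithm 2 with CVXPY; a minimal real-valued sketch might look as follows. The DCT setup, problem sizes, and noise level are illustrative choices of ours; for the $\ell_0$ setting of Theorem 2 one would instead pass $A = [F, I]$ and $\eta = \|\hat{x}_{t(k)}\|_2$.

```python
import cvxpy as cp
import numpy as np
from scipy.fft import dct

def basis_pursuit(y, A, eta):
    # Algorithm 2: minimize ||z||_1 subject to ||A z - y||_2 <= eta.
    z = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(z, 1)),
                      [cp.norm(A @ z - y, 2) <= eta])
    prob.solve()                        # any conic solver that handles SOCPs
    return z.value

# Illustrative usage in the l2-noise setting of Theorem 3 (A = F unitary):
n, k, eta = 256, 8, 0.01
rng = np.random.default_rng(2)
F = dct(np.eye(n), axis=0, norm="ortho")
xhat = np.zeros(n)
xhat[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # exactly k-sparse
e = rng.standard_normal(n)
e *= eta / np.linalg.norm(e)            # ||e||_2 <= eta
y = F @ xhat + e

x_sharp = basis_pursuit(y, F, eta)
# xhat is exactly k-sparse, so the tail term in (9) vanishes; the bound is 6*eta:
print(np.linalg.norm(x_sharp - xhat), 6 * eta)
```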
We now focus on proving Theorem 3. In order to do so, we will need some lemmas that will be used in the main proof.

Lemma 14. If a matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $\ell_2$ robust null space property for $S \subseteq [N]$ with $\mathrm{card}(S) = s$, then it satisfies the $\ell_1$ robust null space property for $S$ with constants $0 < \rho < 1$ and $\tau' := \tau\sqrt{s} > 0$.

Proof. For any $v \in \mathbb{C}^N$, $\|v_S\|_2 \le \frac{\rho}{\sqrt{s}}\|v_{\bar{S}}\|_1 + \tau\|Av\|$. Then, using the fact that $\|v_S\|_1 \le \sqrt{s}\|v_S\|_2$, we get $\|v_S\|_1 \le \rho\|v_{\bar{S}}\|_1 + \tau\sqrt{s}\|Av\|$.

Lemma 15 (Theorem 4.20 in Foucart & Rauhut (2017)). If a matrix $A \in \mathbb{C}^{m\times N}$ satisfies the $\ell_1$ robust null space property (with respect to $\|\cdot\|$) with constants $0 < \rho < 1$ and $\tau > 0$ for $S \subseteq [N]$, then
$$\|z - x\|_1 \le \frac{1+\rho}{1-\rho}\big(\|z\|_1 - \|x\|_1 + 2\|x_{\bar{S}}\|_1\big) + \frac{2\tau}{1-\rho}\|A(z - x)\|$$
for all $z, x \in \mathbb{C}^N$.

Lemma 16 (Proposition 2.3 in Foucart & Rauhut (2017)). For any $p > q > 0$ and $x \in \mathbb{C}^n$,
$$\inf_{z \in M_k}\|x - z\|_p \le \frac{1}{k^{\frac{1}{q}-\frac{1}{p}}}\|x\|_q$$
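For instance, with $p = 2$ and $q = 1$ this is Stechkin's bound $\|x_{t(k)}\|_2 \le \|x\|_1/\sqrt{k}$, which the following few lines check numerically (the vector and $k$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
k = 50
tail = np.sort(np.abs(x))[:-k]      # all entries except the k largest in magnitude
assert np.linalg.norm(tail, 2) <= np.linalg.norm(x, 1) / np.sqrt(k)
print("Stechkin bound holds")
```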
Proof of Theorem 3. Let $0 < \rho < 1$ be arbitrary. Since $F$ is a unitary matrix, for any $S \subseteq [n]$ and $v \in \mathbb{C}^n$, we have
$$\|v_S\|_2 \le \frac{\rho}{\sqrt{k}}\|v_{\bar{S}}\|_1 + \tau\|v\|_2 = \frac{\rho}{\sqrt{k}}\|v_{\bar{S}}\|_1 + \tau\|Fv\|_2 \qquad (17)$$
where $\tau = 1$. Now let $S \subseteq [n]$ be such that $\mathrm{card}(S) \le k$. Then $F$ satisfies the $\ell_2$ robust null space property for $S$. Next, using Lemma 14, we get $\|v_S\|_1 \le \rho\|v_{\bar{S}}\|_1 + \tau\sqrt{k}\|Fv\|_2$ for all $v \in \mathbb{C}^n$. Now let $x^{\#} = \mathrm{BP}(y, F, \eta)$; then we know $\|x^{\#}\|_1 \le \|\hat{x}\|_1$, since $\hat{x}$ is feasible. Fixing $S \subseteq [n]$ to be the support of $\hat{x}_{h(k)}$ and using Lemma 15, we get:
$$\begin{aligned} \|x^{\#} - \hat{x}\|_1 &\le \frac{1+\rho}{1-\rho}\big(\|x^{\#}\|_1 - \|\hat{x}\|_1 + 2\|\hat{x}_{t(k)}\|_1\big) + \frac{2\tau\sqrt{k}}{1-\rho}\|F(x^{\#} - \hat{x})\|_2 \\ &\le \frac{1+\rho}{1-\rho}\big(2\|\hat{x}_{t(k)}\|_1\big) + \frac{2\tau\sqrt{k}}{1-\rho}\big(\|Fx^{\#} - y\|_2 + \|e\|_2\big) \\ &\le \frac{1+\rho}{1-\rho}\big(2\|\hat{x}_{t(k)}\|_1\big) + \frac{4\tau\sqrt{k}}{1-\rho}\,\eta \end{aligned}$$
Letting $\rho \to 0$ and recalling that $\tau = 1$ gives (8). Now let $S$ be the support of $(x^{\#} - \hat{x})_{h(k)}$. Note that $\|(x^{\#} - \hat{x})_{\bar{S}}\|_2 = \inf_{z \in M_k}\|(x^{\#} - \hat{x}) - z\|_2$. Then, using Lemma 16 and (17), we see that
$$\begin{aligned} \|x^{\#} - \hat{x}\|_2 &\le \|(x^{\#} - \hat{x})_{\bar{S}}\|_2 + \|(x^{\#} - \hat{x})_S\|_2 \\ &\le \frac{1}{\sqrt{k}}\|x^{\#} - \hat{x}\|_1 + \frac{\rho}{\sqrt{k}}\|(x^{\#} - \hat{x})_{\bar{S}}\|_1 + \tau\|F(x^{\#} - \hat{x})\|_2 \\ &\le \frac{1+\rho}{\sqrt{k}}\|x^{\#} - \hat{x}\|_1 + 2\tau\eta \\ &\le \frac{(1+\rho)^2}{\sqrt{k}(1-\rho)}\big(2\|\hat{x}_{t(k)}\|_1\big) + \frac{4\tau(1+\rho)}{1-\rho}\eta + 2\tau\eta \\ &= \frac{(1+\rho)^2}{\sqrt{k}(1-\rho)}\big(2\|\hat{x}_{t(k)}\|_1\big) + \left(\frac{4\tau(1+\rho)}{1-\rho} + 2\tau\right)\eta \end{aligned}$$
Recalling $\tau = 1$ and letting $\rho \to 0$ gives the desired result.
A.4 DANTZIG SELECTOR
Next we prove the recovery guarantee of the Dantzig Selector algorithm with the additional constraint for $\ell_\infty$-norm bounded noise; the reasoning behind the additional constraint is explained in Section 3.2.
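As with BP, the paper solves Algorithm 3 with CVXPY; a minimal sketch for a real orthonormal transform such as the DCT might look as follows (the function name is ours, and the second constraint is the additional one discussed in Section 3.2).

```python
import cvxpy as cp
import numpy as np

def dantzig_selector(y, F, eta):
    # Algorithm 3 for a real orthonormal F (so F^* = F^T):
    #   minimize ||z||_1
    #   subject to ||F^*(F z - y)||_inf <= sqrt(n) * eta   (standard DS constraint)
    #              ||F z - y||_inf      <= eta             (additional constraint)
    n = F.shape[0]
    z = cp.Variable(n)
    r = F @ z - y
    prob = cp.Problem(cp.Minimize(cp.norm(z, 1)),
                      [cp.norm(F.T @ r, "inf") <= np.sqrt(n) * eta,
                       cp.norm(r, "inf") <= eta])
    prob.solve()
    return z.value
```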
Proof of Theorem 4. The proof follows the same structure as the proof of Theorem 3, so we provide a sketch and leave out the complete derivation. Let $0 < \rho < 1$ be arbitrary. Since $F$ is a unitary matrix, for any $S \subseteq [n]$ with $\mathrm{card}(S) \le k$ and any $v \in \mathbb{C}^n$, we have
$$\|v_S\|_2 \le \frac{\rho}{\sqrt{k}}\|v_{\bar{S}}\|_1 + \|v_S\|_2 \le \frac{\rho}{\sqrt{k}}\|v_{\bar{S}}\|_1 + \sqrt{k}\|v\|_\infty = \frac{\rho}{\sqrt{k}}\|v_{\bar{S}}\|_1 + \sqrt{k}\|F^*Fv\|_\infty$$
The rest of the argument is the same as in the proof of Theorem 3.

| 1. What is the focus of the paper regarding neural network classification models?
2. What are the strengths and weaknesses of the proposed algorithm in terms of its main idea and intuition?
3. What are the concerns regarding the experimental setup and attacker training?
4. How does the reviewer assess the novelty and effectiveness of the proposed approach?
5. Are there any additional comments or suggestions provided by the reviewer after the rebuttal? | Review | Review
The paper studies the problem of the robustness of neural network-based classification models under adversarial attacks. The paper improves upon the known framework for defending against l_0, l_2 norm attackers.

The main idea of the algorithm is to use the "compressive sensing" framework to preprocess the image: using F, the discrete Fourier transformation matrix, the algorithm tries to produce, for every given input x, a vector y with the smallest number of non-zero coordinates such that Fy approximates x. The main algorithms proposed in this paper are sparse iterative hard thresholding (IHT) and basis pursuit (BP), which are all quite simple and standardized.

The intuition of the approach is that l_0, l_2 attackers on the original input x cannot alter the sparse vector y by too much; thus the recovered vector Fy could have better robustness properties compared to the original input x.

The main concern for me is the experiments in this paper. The authors do not provide enough details about how the attacker is trained in their task. It seems that the authors only use an attacker trained on a standard neural network. However, since the authors have a preprocessing algorithm (IHT, BP) on top of the given input, the attacker should in principle try to attack this pre-processing process as well. Since the pre-processing process is not differentiable, it is unclear to me how to define the true robustness of the authors' approach.

An analogy for my argument: if we create an artificial network that has a pre-processing layer that zeros out most of the input pixels, but train an attacker without this knowledge (so it tries to attack a network without this pre-processing), the l_2, l_0 attacker might not be very good against the true network.

After Rebuttal: I have read the authors' responses and acknowledge the sensibility of the statement. However, I still think the algorithm in this paper is merely a "clever" version of gradient masking, which does not give the neural networks real robustness; it is just harder to design attacks on all these discrete operations. |
| 1. What is the focus of the paper, and what are the key contributions?
2. What are the strengths of the proposed approach, particularly in terms of its ability to handle ℓ2 and ℓ∞ attacks?
3. What are the weaknesses of the paper, especially regarding the experimental results and the lack of discussion on the novelty and significance of the analysis?
4. Do you have any concerns about the applicability of the proposed methods in real-world scenarios?
5. What are the limitations of the paper, such as the choice of classical recovery algorithms and the lack of discussion on the technical challenges overcome? | Review | Review
This paper extends the compressive sensing framework introduced in Bafna et al. to handle ℓ2 and ℓ∞ attacks. The authors provide theoretical analysis for several recovery algorithms (IHT, BP, DS) and provide experimental results on CIFAR-10, MNIST and Fashion-MNIST.

My major concern is how significant the provided results are. It is indeed interesting to extend the compressive sensing framework to handle ℓ2 and ℓ∞ attacks. However, the proposed recovery algorithms are all classical ones, and it is unclear how novel the analysis is, since the authors do not discuss the technical challenges they overcome or the difference between their proof techniques and the previous ones. Also, it would be nice if the authors could discuss the theoretical results in more detail, e.g., how to interpret them and the new insights they bring.

Moreover, some experimental details are missing. In the last paragraph of Section 3.1, the authors say ``We then use both x and x′ to train the network''. How do you do so? Just add both x and x' to the training set? |
ICLR | Title
Compressive Recovery Defense: A Defense Framework for $\ell_0, \ell_2$ and $\ell_\infty$ norm attacks.
Abstract
We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in Bafna et al. (2018) to defend neural networks against `0, `2, and `∞-norm attacks. In the case of `0-norm noise, we provide recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For `2-norm bounded noise, we provide recovery guarantees for BP, and for the case of `∞-norm bounded noise, we provide recovery guarantees for a modified version of Dantzig Selector (DS). These guarantees theoretically bolster the defense framework introduced in Bafna et al. (2018) for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate the effectiveness of this defense framework against an array of `0, `2 and `∞-norm attacks.
N/A
COMPRESSIVE RECOVERY DEFENSE: A DEFENSE FRAMEWORK FOR `0, `2, AND `∞ NORM ATTACKS.
Anonymous authors Paper under double-blind review
1 ABSTRACT
We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in Bafna et al. (2018) to defend neural networks against `0, `2, and `∞-norm attacks. In the case of `0-norm noise, we provide recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For `2-norm bounded noise, we provide recovery guarantees for BP, and for the case of `∞-norm bounded noise, we provide recovery guarantees for a modified version of Dantzig Selector (DS). These guarantees theoretically bolster the defense framework introduced in Bafna et al. (2018) for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate the effectiveness of this defense framework against an array of `0, `2 and `∞-norm attacks.
2 INTRODUCTION
Signal measurements are often corrupted by noise. The theory of compressive sensing (Candes et al. (2006)) allows us to retrieve the original signal from a corrupted measurement, under some structural assumptions on the measurement mechanism and the signal. Let us consider the class of machine learning problems where the inputs are compressible (i.e., approximately sparse) in some domain. For instance, images and audio signals are known to be compressible in their frequency domain and machine learning algorithms have been shown to perform exceedingly well on classification tasks that take such signals as input (Krizhevsky et al. (2012); Sutskever et al. (2014)). However, it was found in Szegedy et al. (2013) that neural networks can be easily forced into making incorrect predictions by adding adversarial perturbations to their inputs; see also Szegedy et al. (2014); Goodfellow et al. (2015); Papernot et al. (2016); Carlini & Wagner (2017). Further, the adversarial perturbations that led to incorrect predictions were shown to be very small (in either `0, `2, or `∞-norm) and often imperceptible to human beings. For this class of machine learning tasks, we show how to approximately recover original inputs from adversarial inputs and thus defend the neural network `0-norm, `2-norm and `∞-norm attacks.
In the case of `0-norm attacks on neural networks, the adversary can perturb a bounded number of coordinates in the input vector but has no restriction on how much each coordinate is perturbed in absolute value. In the case of `2-norm attacks, the adversary can perturb as many coordinates of the input vector as they choose as long as the `2-norm of the perturbation vector is bounded. Finally, in `∞-norm attacks, the adversary is only constrained by the amount of noise added to each coordinate of the input vector.
The contribution and structure of this paper is as follows. In Section 3.1, we describe the Compressive Recovery Defense (CRD) framework, a compressive-sensing-based framework for defending neural networks against adversarial inputs. This is essentially the same framework introduced in Bafna et al. (2018), though Bafna et al. (2018) considered only `0 attacks. In Section 3.2, we present the recovery algorithms which are used in the CRD framework to approximately recover original inputs from adversarial inputs. These algorithms include standard Basis Pursuit (BP), (k, t)-sparse Iterative Hard Thresholding (IHT) and Dantzig Selector (DS) with an additional constraint. In Section 3.3, we state recovery guarantees for the recovery algorithms in the presence of noise bounded in either `0, `2, or `∞-norm. The guarantees apply to arbitrary `0, `2, and `∞-norm attacks; they do not require prior knowledge of the adversary’s attack strategy. The recovery guarantees are proved rigorously in Appendix A. In Section 4, we experimentally demonstrate the performance of
the CRD framework in defending neural network classifiers on CIFAR-10, MNIST, and FashionMNIST datasets against state-of-the-art `0, `2 and `∞-norm attacks.
Notation. Let x be a vector in CN. Let S ⊆ {1, . . . , N} and S̄ = {1, . . . , N} \ S. The cardinality of S is |S|. If A ∈ Cm×N is a matrix, then AS ∈ Cm×|S| is the column submatrix of A consisting of the columns indexed by S. We denote by xS either the sub-vector in CS consisting of the entries indexed by S or the vector in CN that is formed by starting with x and setting the entries indexed by S̄ to zero. For example, if x = [4, 5, −9, 1]T and S = {1, 3}, then xS is either [4, −9]T or [4, 0, −9, 0]T. It will always be clear from context which meaning is intended. Note that, under the second meaning, xS̄ = x − xS. The support of x, denoted by supp(x), is the set of indices of the non-zero entries of x, i.e., supp(x) = {i ∈ {1, . . . , N} : xi ≠ 0}. The `0-quasinorm of x, denoted ‖x‖0, is defined to be the number of non-zero entries of x, i.e. ‖x‖0 = card(supp(x)). We say that x is k-sparse if ‖x‖0 ≤ k. We use xh(k) to denote a k-sparse vector in CN consisting of the k largest (in absolute value) entries of x with all other entries zero. For example, if x = [4, 5, −9, 1]T then xh(2) = [0, 5, −9, 0]T. Note that xh(k) may not be uniquely defined. In contexts where a unique meaning for xh(k) is needed, we can choose xh(k) out of all possible candidates according to a predefined rule (such as the lexicographic order). We also define xt(k) = x − xh(k). If x = [x1, x2]T ∈ C2n with x1, x2 ∈ Cn, and if x1 is k-sparse and x2 is t-sparse, then x is called (k, t)-sparse. We define xh(k,t) = [(x1)h(k), (x2)h(t)]T, which is a (k, t)-sparse vector in C2n.
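To make this notation concrete, here is a small numpy sketch (our own illustrative helpers, not part of the paper) reproducing the running example x = [4, 5, −9, 1]T:

```python
import numpy as np

x = np.array([4.0, 5.0, -9.0, 1.0])
S = np.array([0, 2])  # S = {1, 3} in the paper's 1-based indexing

x_S_sub = x[S]                   # sub-vector meaning: [4, -9]
x_S_full = np.zeros_like(x)
x_S_full[S] = x[S]               # full-length meaning: [4, 0, -9, 0]

def h(x, k):
    """x_h(k): keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

x_h2 = h(x, 2)     # [0, 5, -9, 0]
x_t2 = x - x_h2    # the tail x_t(k) = x - x_h(k): [4, 0, 0, 1]
```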
3 THEORY
3.1 COMPRESSIVE RECOVERY DEFENSE (CRD)
Bafna et al. (2018) introduced a framework for defending machine learning classifiers against `0- attacks. We extend the framework to `2 and `∞ attacks. The defense framework is based on the theory of compressive sensing, so we call it Compressive Recovery Defense (CRD).
We explain the idea behind the CRD framework in the context of an image classifier. Suppose x ∈ Cn is a (flattened) image vector we wish to classify. But suppose an adversary perturbs x with a noise vector e ∈ Cn. We observe y = x + e, while x and e are unknown to us. Let F ∈ Cn×n be the Discrete Fourier Transform (DFT) matrix. The Fourier coefficients of x are x̂ = Fx. It is well-known that natural images are approximately sparse in the frequency domain. So we expect that x̂ is approximately sparse, meaning that x̂t(k) is small for some small k. We can write

y = F−1x̂ + e   (1)

If ‖e‖2 ≤ η or ‖e‖∞ ≤ η, with η small (as in an `2 or `∞-attack), then we can use an appropriate sparse recovery algorithm with y and F−1 as input to compute a good approximation x# to x̂. Precise error bounds are given in Section 3.3. Then, since F is unitary, F−1x# will be a good approximation (i.e., reconstruction) of x = F−1x̂. So we can feed F−1x# into the classifier and expect to get the same classification as we would have for x. For an `0-attack where e is t-sparse, the approach is only slightly different. We set A = [F−1, I] and write
y = F−1x̂ + e = F−1x̂h(k) + e + F−1x̂t(k) = A[x̂h(k), e]T + F−1x̂t(k),   (2)
so that [x̂h(k), e]T is (k, t)-sparse. This structure lets us use a sparse recovery algorithm to compute a good approximation to x̂, as before. Note that the same idea can be applied with audio signals or other types of data instead of images. Moreover, the DFT can be replaced by any unitary transformation F for which x̂ = Fx is approximately sparse. For example, F may be the Cosine Transform, Sine Transform, Hadamard Transform, or another wavelet transform.
We now describe the training and testing procedure for CRD. For each training image x, we compute x̂h(k) = (Fx)h(k), and then compute the compressed image x′ = F−1x̂h(k). We then add both x and x′ to the training set and train the network in the usual way. Given a (potentially adversarial) test image y, we first use a sparse recovery algorithm to compute an approximation x# to x̂, then we compute the reconstructed image y′ = F−1x# and feed it into the network for classification.
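A rough sketch of this train/test pipeline for a single flattened image might look as follows; we use scipy's orthonormal DCT as the unitary transform F (as in the experiments of Section 4), and `recover` is a stand-in name for any of the algorithms of Section 3.2:

```python
import numpy as np
from scipy.fft import dct, idct

def compress(x, k):
    """Compressed image x' = F^{-1}(Fx)_h(k), with F the orthonormal DCT."""
    xhat = dct(x, norm='ortho')
    keep = np.argsort(np.abs(xhat))[-k:]
    xhat_hk = np.zeros_like(xhat)
    xhat_hk[keep] = xhat[keep]
    return idct(xhat_hk, norm='ortho')

# Training: add both x and compress(x, k) to the training set.
# Testing: given a (possibly adversarial) y, compute x_sharp = recover(y, ...)
# with IHT/BP/DS, then classify the reconstruction idct(x_sharp, norm='ortho').
```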
3.2 RECOVERY ALGORITHMS
We now present the recovery algorithms used in the CRD framework. For `0-attacks, we set A = [F−1, I] as in (2). Against `2 or `∞-attacks, we take A = F−1 as in (1).
Algorithm 1: (k, t)-Sparse Iterative Hard Thresholding (IHT)
Procedure: IHT(y, A, k, t, T)
Input: y ∈ Cn, A ∈ Cn×2n, and positive integers k, t, T.
  x[0] = 0
  for i := 0 to T do
    x[i+1] = (x[i] + A∗(y − Ax[i]))h(k,t)
  return x# = x[T+1]
The IHT algorithm above is used to defend against `0-norm attacks. For such attacks, according to (2), the vector we need to recover is (k, t)-sparse. Thus this IHT is adapted to the structure of our problem, as it uses the thresholding operation h(k,t) that produces (k, t)-sparse vectors. This structured IHT was first considered in Baraniuk et al. (2010). It gives better theoretical guarantees and practical performance in our CRD application than the standard IHT, which would instead use the thresholding operation h(k+t) that produces (k + t)-sparse vectors. For `2 or `∞ attacks, the recovery error for IHT would (in general) be larger due to the need to include a term for the `2-norm of the tail of the noise vector e. This, in turn, produces worse expected performance of the recovery defense. Therefore we only use Algorithm 1 for `0-norm attacks. We note that the results of Theorem 1 allow for values of k and t greater than or equal to those permitted by Theorem 2.2 of Bafna et al. (2018).
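For illustration, a direct numpy rendering of Algorithm 1 could look like this; the helper `h_block` (our name) realises the blockwise thresholding h(k,t) described in the Notation paragraph:

```python
import numpy as np

def h_block(x, k):
    """Keep the k largest-magnitude entries of a block, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht_kt(y, A, k, t, T):
    """(k, t)-sparse IHT of Algorithm 1; A has shape (n, 2n)."""
    n = A.shape[0]
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(T + 1):
        z = x + A.conj().T @ (y - A @ x)          # gradient step x + A*(y - Ax)
        x = np.concatenate([h_block(z[:n], k),    # k largest in the first block
                            h_block(z[n:], t)])   # t largest in the second block
    return x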
Algorithm 2: Basis Pursuit (BP)
Procedure: BP(y, A, η)
Input: y ∈ Cm, A ∈ Cm×N, and η ≥ 0.
  x# = argmin(z∈CN) ‖z‖1 subject to ‖Az − y‖2 ≤ η
  return x#
We utilize BP for `0 and `2 norm attacks. In the `0 norm case, BP allows us to provide recovery guarantees for larger values of k and t than IHT. For instance, in the case of MNIST and FashionMNIST, IHT (equation (4) of Theorem 1) allows us to set k = 4 and t = 3, whereas BP (equation (7) of Theorem 2) allows us to set k = 8 and t = 8.
In the case of `2 norm attacks, BP is applied with A = F−1, a unitary matrix. As unitary matrices are isometries in `2 norm, BP provides good recovery guarantees for such matrices, and hence against `2 norm attacks.
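Since the paper solves the minimisation problems with CVXPY (Section 4), a minimal real-valued sketch of Algorithm 2 might be:

```python
import cvxpy as cp
import numpy as np

def basis_pursuit(y, A, eta):
    """BP(y, A, eta): minimise ||z||_1 subject to ||Az - y||_2 <= eta."""
    z = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(z, 1)),
                      [cp.norm(A @ z - y, 2) <= eta])
    prob.solve()
    return z.value
```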
Algorithm 3: Modified Dantzig Selector (DS)
Procedure: DS(y, A, η)
Input: y ∈ Cm, A ∈ Cm×N, and η ≥ 0.
  x# = argmin(z∈CN) ‖z‖1 subject to ‖A∗(Az − y)‖∞ ≤ √n·η and ‖Az − y‖∞ ≤ η
  return x#
We utilize DS for `∞ norm attacks. The standard Dantzig Selector algorithm does not have the additional constraint ‖Az − y‖∞ ≤ η. Our modified Dantzig Selector includes this constraint for the following reason. In our application, A = F−1 and we want the reconstruction Ax# = F−1x# to be close to the original image x, so that they are classified identically. Thus, we want the search space for x# to be restricted to those z ∈ CN such that ‖Az − x‖∞ is small. Note that, for any z ∈ CN, ‖Az − x‖∞ ≤ ‖Az − y‖∞ + ‖x − y‖∞. In an `∞-attack, ‖x − y‖∞ = ‖e‖∞ is already small. Thus it suffices to require that ‖Az − y‖∞ be small. We experimentally illustrate the improvement in reconstruction due to the additional constraint in Section 4.3 (Figure 4, Table 4).
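Analogously, the modified DS of Algorithm 3, with its additional `∞ constraint, can be sketched in CVXPY for real-valued A as follows:

```python
import cvxpy as cp
import numpy as np

def dantzig_selector(y, A, eta):
    """Modified DS: the second constraint keeps Az close to y in l_inf norm."""
    n = A.shape[0]
    z = cp.Variable(A.shape[1])
    constraints = [cp.norm(A.T @ (A @ z - y), 'inf') <= np.sqrt(n) * eta,
                   cp.norm(A @ z - y, 'inf') <= eta]  # the additional constraint
    prob = cp.Problem(cp.Minimize(cp.norm(z, 1)), constraints)
    prob.solve()
    return z.value
```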
Remarks on Reverse-Engineered Attacks. As observed in Bafna et al. (2018), x[0] in Algorithm 1 can be initialized randomly to defend against a reverse-engineered attack. In the case of Algorithm 2 and Algorithm 3, the minimization problems can be posed as semi-definite programming problems. If solved with interior point methods, one can use random initialization of the central path parameter and add randomness to the stopping criterion. This makes recovery non-deterministic and consequently makes it non-trivial to create a reverse-engineered attack.
3.3 RECOVERY GUARANTEES
Let F ∈ Cn×n be a unitary matrix and I ∈ Cn×n be the identity matrix. Define A = [F, I] ∈ Cn×2n and let y = A[x̂, e]T = Fx̂ + e, where x̂, e ∈ Cn. Let 1 ≤ k, t ≤ n be integers.
Theorem 1 (`0-norm IHT). Assume |Fij|² ≤ c/n and e is t-sparse. Let x[T+1] = IHT(y, A, k, t, T), where x[T+1] = [x̂[T+1], e[T+1]]T ∈ C2n with x̂[T+1], e[T+1] ∈ Cn. Define

ρ := √27·√(ckt/n),   τ(1 − ρ) := √3·√(1 + 2√(ckt/n)).

If 0 < ρ < 1, then:

‖x̂[T+1] − x̂h(k)‖2 ≤ ρ^(T+1)·√(‖x̂h(k)‖2² + ‖e‖2²) + τ‖x̂t(k)‖2   (3)

Moreover, for any 0 < ε < 1 and any T ≥ (log(1/ε) + log(√(‖x̂h(k)‖2² + ‖e‖2²))) / log(1/ρ), we get:

‖x̂[T+1] − x̂h(k)‖2 ≤ τ‖x̂t(k)‖2 + ε   (4)

Now define ρ := 2√2·√(ckt/n), τ(1 − ρ) := 2. If 0 < ρ < 1, then:

‖x̂[T+1] − x̂h(k)‖2 ≤ ρ^(T+1)·‖x̂h(k)‖2 + τ(‖x̂t(k)‖2 + ‖e‖2)   (5)

Moreover, for any 0 < ε < 1 and any T ≥ (log(1/ε) + log(‖x̂h(k)‖2)) / log(1/ρ), we get:

‖x̂[T+1] − x̂h(k)‖2 ≤ τ(‖x̂t(k)‖2 + ‖e‖2) + ε   (6)
Let us explain how to interpret the recovery guarantees provided by Theorem 1. The inequalities (3), (4), (5), (6) provide an upper bound on the size of ‖x̂[T+1] − x̂h(k)‖2. Since F is a unitary matrix, ‖x̂[T+1] − x̂h(k)‖2 equals ‖Fx̂[T+1] − Fx̂h(k)‖2, which is the difference between the reconstructed image Fx̂[T+1] and the compressed image Fx̂h(k) (which is a compressed version of the original image x). So the inequalities of Theorem 1 tell us how close the reconstructed image must be to the compressed image, and thus indicates how confident we should be that the classification of the reconstructed image will agree with the classification of the compressed image. In other words, the inequalities tell us how likely it is that the CRD scheme using IHT will be able to recover the correct class of the original image, and thus defend the classifier from the adversarial attack. The presence of the norm of the tail x̂t(k) in the upper bounds indicates that the CRD scheme should be more effective when the original image is closer to being perfectly k-sparse in the transformed basis. The ratio kt/n in the upper bounds (via ρ and τ ) suggests that smaller values of k and t relative to n (i.e., sparser transformed images x̂ and error vectors e) will lead to the CRD being more effective. The experiments in Section 4 will demonstrate these phenomena.
Let us compare Theorem 1 to the similar Theorem 2.2 of Bafna et al. (2018). We observe that (3) and (4) allow larger values of k and t than Theorem 2.2 of Bafna et al. (2018). This is because the authors of Bafna et al. (2018) prove their results using Theorem 4 of Baraniuk et al. (2010), which is more restrictive for the values of k and t. We do not use Theorem 4 of Baraniuk et al. (2010). Instead we use (a modified form of) Theorem 6.18 of Foucart & Rauhut (2017) to get (3) and (4). Both Theorem 4 of Baraniuk et al. (2010) (used by Bafna et al. (2018)) and Theorem 6.18 of Foucart & Rauhut (2017) (used by us here) take as input the Restricted Isometry Property (RIP) stated in Theorem 7. We and Bafna et al. (2018) both essentially prove the same RIP, although the proof methods are different. We use a standard Gershgorin disc theorem argument to bound eigenvalues, while Bafna et al. (2018) perform a direct estimation using the triangle inequality and AM-GM inequality.
We turn now to (5) and (6), which provide recovery guarantees for larger values of k and t than (3) and (4), at the expense of the extra error term ‖e‖2. Our proof of (5) and (6) is novel. It relies on explicitly expanding one iteration of IHT in matrix form and using the structure of the resulting matrix form to bound the approximation error at iteration T in terms of the error at iteration T − 2. We then use an inductive argument as in Theorem 6.18 of Foucart & Rauhut (2017) to get (5) and (6).
Next, we consider the recovery error for `0-norm bounded noise with BP instead of IHT. We note that since Algorithm 2 is not adapted to the (k, t)-sparse structure of the vector to be recovered, we do not expect the guarantees to be particularly strong. However, providing bounds for BP is useful, as there are cases where BP provides recovery guarantees for a larger number of recovered coefficients (k) and a larger `0 noise budget (t) than IHT.
Theorem 2 (`0-norm BP). Assume |Fij|² ≤ c/n. Define

δk,t = √(ckt/n),   β = √(max{k, t}·c/n),   θ = (√(k + t)/(1 − δk,t))·β,   τ = √(1 + δk,t)/(1 − δk,t).

If 0 < δk,t < 1 and 0 < θ < 1, then for x# = BP(y, A, ‖x̂t(k)‖2), we have the error bound

‖x̂# − x̂h(k)‖2 ≤ ( (2τ√(k + t)/(1 − θ))·(1 + β/(1 − δk,t)) + 2τ )·‖x̂t(k)‖2   (7)

where we write x# = [x̂#, e#]T ∈ C2n with x̂#, e# ∈ Cn.
Note that the recovery error in (7) is O(√(k + t)·‖x̂t(k)‖2), which means that we should not expect recovery to be close when the attacker has a large `0 noise budget or when x̂ is not sparse. Also observe that the recovered vector x̂# is not necessarily k-sparse. The recovery error still captures the difference between the original image Fx̂ and the reconstructed image Fx̂#, where a smaller recovery error should once again indicate that our classifier will make the correct prediction. Our third result covers the case when the noise is bounded in `2-norm.

Theorem 3 (`2-norm BP). If ‖e‖2 ≤ η, then for x# = BP(y, F, η), we have the error bounds

‖x# − x̂‖1 ≤ 2(‖x̂t(k)‖1 + 2√k·η)   (8)

‖x# − x̂‖2 ≤ (2/√k)·‖x̂t(k)‖1 + 6η   (9)
Finally, we provide recovery guarantees when the noise is bounded in `∞-norm.

Theorem 4 (`∞-norm DS). If ‖e‖∞ ≤ η, then for x# = DS(y, F, η), we have the error bounds

‖x# − x̂‖1 ≤ 2(‖x̂t(k)‖1 + 2k√n·η)   (10)

‖x# − x̂‖2 ≤ (2/√k)·‖x̂t(k)‖1 + 6√(kn)·η   (11)
The proofs of Theorem 3 and Theorem 4 are based on standard arguments in compressive sensing that rely on establishing the so-called robust null space property of the matrix. Note that the results of Theorem 3 and Theorem 4 also bound the norm difference of the original image Fx̂ and the reconstructed image Fx̂#, where x̂# has no sparsity guarantees. Next, observe that the results of Theorem 4 incur a factor of √n in the error bounds due to the constraint ‖A∗(Az − y)‖∞ ≤ √n·η in Algorithm 3, which is required to prove the robust null space property. Finally, we note that the additional constraint added to Algorithm 3 does not affect the proof of Theorem 4.
3.4 RELATED WORK
The authors of Bafna et al. (2018) introduced the CRD framework which inspired this work. In fact, Theorem 2.2 of Bafna et al. (2018) also provides an approximation error bound for recovery via IHT. Note that a hypothesis t = O(n/k) has accidentally been dropped from their Theorem 2.2, though it appears in their Lemma 3.6. By making the implied constants explicit in the argument of Bafna et al. (2018), one sees that their Theorem 2.2 is essentially the same as (3) and (4) in Theorem 1 above. For more details, see the proof of Theorem 1 in Appendix A. Note that our recovery error bounds for IHT in (5) and (6) of Theorem 1 do not have analogs in Bafna et al. (2018). They hold for larger values of k and t at the expense of the additional error term ‖e‖2. Other works that provide guarantees include Hein & Andriushchenko (2017) and Cisse et al. (2017), where the authors frame the problem as one of regularizing the Lipschitz constant of a network and give a lower bound on the norm of the perturbation required to change the classifier decision. The authors of Sinha et al. (2017) use robust optimization to perturb the training data and provide a training procedure that updates parameters based on worst-case perturbations. A similar approach to Sinha et al. (2017) is Wong & Kolter (2017), in which the authors use robust optimization to provide lower bounds on the norm of adversarial perturbations on the training data.
In Lecuyer et al. (2018), the authors use techniques from Differential Privacy (Dwork et al. (2014)) in order to augment the training procedure of the classifier to improve robustness to adversarial inputs. Another approach using randomization is Li et al. (2018), in which the authors add i.i.d. Gaussian noise to the input and provide guarantees of maintaining classifier predictions as long as the `2-norm of the attack vector is bounded by a function that depends on the output of the classifier.
Most defenses against adversarial inputs do not come with theoretical guarantees. Instead, a large body of research has focused on finding practical ways to improve robustness to adversarial inputs by either augmenting the training data (Goodfellow et al. (2015)), using adversarial inputs from various networks (Tramèr et al. (2017)), or by reducing the dimensionality of the input (Xu et al. (2017)). For instance, Madry et al. (2017) use robust optimization to make the network robust to worst-case adversarial perturbations on the training data. However, the effectiveness of their approach is determined by the amount and quality of training data available and its similarity to the distribution of the test data. An approach similar to ours but without any theoretical guarantees is Samangouei et al. (2018). In this work, the authors use Generative Adversarial Networks (GANs) to estimate the distribution of the training data and, during inference, use a GAN to reconstruct a non-adversarial input that is most similar to a given test input. We now provide a brief overview of the field of compressive sensing.
Though some component ideas originated earlier in other fields, the field of compressive sensing was initiated with the work of Candès et al. (2006) and Donoho et al. (2006) in which the authors studied the problem of reconstructing sparse signals using only a small number of measurements with the choice of a random matrix. The reconstruction was performed using `1-minimization (i.e., Basis Pursuit) which was shown to produce sparse solutions even in presence of noise; see also Donoho & Elad (2003; 2006); Donoho & Huo (2001). Some of the earlier work in extending compressive sensing to perform stable recovery with deterministic matrices was done by Candes & Tao (2005) and Candes et al. (2006), where the authors showed that recovery of sparse vectors could be performed as long as the measurement matrix satisfied a restricted isometry hypothesis. Blumensath & Davies (2009) introduced IHT as an algorithm to recover sparse signals which was later modified in Baraniuk et al. (2010) to reduce the search space as long as the sparsity was structured. The standard DS algorithm was introduced in Candes et al. (2007) in order to perform stable recovery in the presence of `∞ noise.
4 EXPERIMENTS
All of our experiments are conducted on CIFAR-10 (Krizhevsky (2009)), MNIST (LeCun), and Fashion-MNIST (Xiao et al. (2017)) datasets with pixel values of each image normalized to lie in [0, 1]. Each experiment is conducted on a set of 1000 points sampled uniformly at random from the test set of the respective dataset. For every experiment, we use the Discrete Cosine Transform (DCT) and the Inverse Discrete Cosine Transform (IDCT) denoted by the matrices F ∈ Rn×n and FT ∈ Rn×n respectively. That is, for an adversarial image y ∈ R^(√n×√n), such that y = x + e, we let x̂ = Fx and x = FT x̂, where x, x̂ ∈ Rn and e ∈ Rn is the noise vector. For an adversarial image y ∈ R^(√n×√n×c) that contains c channels, we perform recovery on each channel independently by considering ym = xm + em, where x̂m = Fxm and xm = FT x̂m for m = 1, . . . , c. The value k denotes the number of largest (in absolute value) DCT coefficients used for reconstruction of each channel, and the value t denotes the `0 noise budget for each channel. We implement Algorithm 2 and Algorithm 3 using the open source library CVXPY (Diamond & Boyd (2016)).
We now outline the neural network architectures used for the experiments in Sections 4.1 and 4.2. For CIFAR-10, we use the network architecture of He et al. (2016), while the network architecture for the MNIST and Fashion-MNIST datasets is provided in Table 5 of the Appendix. We train our networks using the Adam optimizer for CIFAR-10 and the AdaDelta optimizer for MNIST and Fashion-MNIST. In both cases, we use a cross-entropy loss function. We train each neural network according to the CRD framework stated in Section 3.1. The code to reproduce our experiments is available here: https://github.com/anonymousiclrcompressive/iclr2020.
[Table 1 (excerpt): network accuracy on CIFAR-10 under the One Pixel Attack — original images: 77.4%, adversarial images: 0.0%, reconstructed images (Algorithm 1): 71.8%.]
4.1 DEFENSE AGAINST `0-NORM ATTACKS
This section is organized as follows: first, we examine CRD against the One Pixel Attack (OPA) (Su et al. (2019)) for CIFAR-10. We only test the attack on CIFAR-10, as it is most effective against natural images and does not work well on MNIST or Fashion-MNIST. We note that this attack satisfies the theoretical constraints for t provided in Theorem 1, hence allowing us to test how well CRD works within existing guarantees. Once we establish the effectiveness of CRD against OPA, we then test it against two other `0-norm bounded attacks: the Carlini and Wagner (CW) `0-norm attack (Carlini & Wagner (2017)) and the Jacobian-based Saliency Map Attack (JSMA) (Papernot et al. (2016)).
4.1.1 ONE PIXEL ATTACK
We first resize all CIFAR-10 images to 125 × 125 × 3 while maintaining aspect ratios to ensure that the data falls under the hypotheses of Theorem 1 even for large values of k. OPA perturbs exactly one pixel of the image, leading to an `0 noise budget of t = 3 per image, which in turn allows us to use k = 275 per channel. Table 1 shows that OPA is very effective against natural images and forces the network to mis-classify all previously correctly classified inputs.
We test the performance of CRD in two ways: (a) reconstruction quality and (b) network performance on reconstructed images. In order to analyse the reconstruction quality of Algorithm 1, we do the following: for each test image, we use OPA to perturb the image and then use Algorithm 1 to approximate its largest (in absolute value) k = 275 DCT co-efficients. We then perform the IDCT on these recovered co-efficients to generate reconstructed images. We illustrate reconstruction on a randomly selected image from the test set in Figure 1.
Noting that Algorithm 1 leads to high quality reconstruction, we now test whether network accuracy improves on these reconstructed images. To do so, we feed these reconstructed images as input to the network and report its accuracy in Table 1. We note that network performance does indeed improve, as network accuracy goes from 0.0% to 71.8% using Algorithm 1. Therefore, we conclude that CRD provides a substantial improvement in accuracy against OPA.
4.1.2 CW-`0 ATTACK AND JSMA
Having established the effectiveness of CRD against OPA, we move on to the CW `0-norm attack and JSMA. We note that even when t is much larger than the hypotheses of Theorem 1 and Theorem 2 allow, we find that Algorithms 1 and 2 are still able to defend the network. We hypothesize that this may be related to the behavior of the RIP of a matrix for "most" vectors as opposed to the RIP for all vectors, and leave a more rigorous analysis for follow-up work.
We follow the procedure described in Section 4.1.1 to analyze the quality of reconstructions for Algorithm 1 and Algorithm 2 in Fig 2. In each case it can be seen that both algorithms provide high quality reconstructions for values of t that are well outside the hypotheses required by Theorem 1 and Theorem 2. We report these t values and the improvement in network performance on reconstructed adversarial images using CRD in Table 2.
4.2 DEFENSE AGAINST `2-NORM ATTACKS
In the case of `2-norm bounded attacks, we use the CW `2-norm attack (Carlini & Wagner (2017)) and the Deepfool attack (Moosavi-Dezfooli et al. (2016)) as they have been shown to be the most powerful. We note that Theorem 3 does not impose any restrictions on k or t and therefore the guarantees of equations (8) and (9) are applicable for recovery in all experiments of this section.
The reconstruction quality is shown in Figure 3. It can be noted that reconstruction using Algorithm 2 is of high quality for all three datasets. In order to check whether this high quality reconstruction also leads to improved performance in network accuracy, we test each network on reconstructed images using Algorithm 2. We report the results in Table 3 and note that Algorithm 2 provides a substantial improvement in network accuracy for each dataset and each attack method used.
4.3 DEFENSE AGAINST `∞-NORM ATTACKS
For `∞-norm bounded attacks, we use the BIM attack (Kurakin et al. (2016)) as it has been shown to be very effective and also allows us to control the `∞-norm of the attack vector explicitly. We note that while the CW `∞-norm attack (Carlini & Wagner (2017)) has the ability to create attack vectors with `∞-norm less than or equal to BIM, it is computationally expensive and also does not allow one to pre-specify a value for the `∞-norm of an attack vector. Therefore, we limit our experimental analysis to the BIM attack. Note that for any attack vector e, ‖e‖2 ≤ √n·‖e‖∞, hence allowing `∞-norm attacks to create attack vectors with large `2-norm. Therefore, we could expect reconstruction quality and network accuracy to be lower when compared to `2-norm attacks.
In Figure 4, we compare the reconstruction quality of images reconstructed with Algorithm 3 to those reconstructed using DS without the additional constraint. As can be noted from the figure, images reconstructed using DS without the additional constraint may not be meaningful. This is also reflected in Table 4, which shows that the accuracy of the network is roughly random on images reconstructed without the additional constraint.
We show examples of original images, adversarial images, and their reconstructions using Algorithm 3 in Figure 5. Finally, we report the network performance on reconstructed inputs using Algorithm 3 in Table 4 and also compare this to the performance on inputs reconstructed using DS without the additional constraint. We note that Algorithm 3 provides an increase in network performance against reconstructed adversarial inputs. However, the improvement in performance is not as substantial as it was against `0 or `2-norm attacks.
5 CONCLUSION
We provided recovery guarantees for corrupted signals in the case of `0-norm, `2-norm, and `∞-norm bounded noise. We were able to utilize these results in CRD and improve the performance of neural networks substantially in the case of `0-norm, `2-norm and `∞-norm bounded noise. While `0-norm attacks do not always satisfy the constraints required by Theorem 1 and Theorem 2, we showed that CRD is still able to provide a good defense for values of t much larger than allowed in the guarantees. The guarantees of Theorem 3 and Theorem 4 were applicable in all experiments and CRD was shown to improve network performance for all attacks.
A APPENDIX
A.1 RESTRICTED ISOMETRY PROPERTY
We first establish the restricted isometry property for certain structured matrices, beginning with some definitions.
Definition 5. Let A be a matrix in Cm×N, let M ⊆ CN, and let δ ≥ 0. We say that A satisfies the M-restricted isometry property (or M-RIP) with constant δ if

(1 − δ)‖x‖2² ≤ ‖Ax‖2² ≤ (1 + δ)‖x‖2² for all x ∈ M.

Definition 6. We define Mk to be the set of all k-sparse vectors in CN and similarly define Mk,t to be the set of (k, t)-sparse vectors in C2n. In other words, Mk,t is the following subset of C2n:

Mk,t = { x = [x1, x2]T ∈ C2n : x1 ∈ Cn, x2 ∈ Cn, ‖x1‖0 ≤ k, ‖x2‖0 ≤ t }
We define Sk,t to be the following collection of subsets of {1, . . . , 2n}:
Sk,t = {S1 ∪ S2 : S1 ⊆ {1, . . . , n} , S2 ⊆ {n+ 1, . . . , 2n} , card(S1) ≤ k, card(S2) ≤ t}
Note that Sk,t is the collection of supports of vectors in Mk,t.
Theorem 7. Let A = [F I] ∈ Cn×2n, where F ∈ Cn×n is a unitary matrix with |Fij|² ≤ c/n and I ∈ Cn×n is the identity matrix. Then

(1 − √(ckt/n))·‖x‖2² ≤ ‖Ax‖2² ≤ (1 + √(ckt/n))·‖x‖2²   (12)

for all x ∈ Mk,t. In other words, A satisfies the Mk,t-RIP property with constant √(ckt/n).
Proof. In this proof, if B denotes a matrix in Cn×n, then λ1(B), . . . , λn(B) denote the eigenvalues of B ordered so that |λ1(B)| ≤ · · · ≤ |λn(B)|. It suffices to fix an S = S1 ∪ S2 ∈ Sk,t and prove (12) for all non-zero x ∈ CS. Since A∗SAS is normal, there is an orthonormal basis of eigenvectors u1, . . . , uk+t for A∗SAS, where ui corresponds to the eigenvalue λi(A∗SAS). For any non-zero x ∈ CS, we have x = ∑(i=1..k+t) ci·ui for some ci ∈ C, so

‖Ax‖2² / ‖x‖2² = 〈A∗SASx, x〉 / 〈x, x〉 = (∑(i=1..k+t) λi(A∗SAS)·|ci|²) / (∑(i=1..k+t) |ci|²).   (13)
Thus it will suffice to prove that |λi(A∗SAS) − 1| ≤ √(ckt/n) for all i. Moreover,

|λi(A∗SAS) − 1| = |λi(A∗SAS − I)| = √(λi((A∗SAS − I)∗(A∗SAS − I)))   (14)

where the last equality holds because A∗SAS − I is normal. By combining (13) and (14), we see that (12) will hold upon showing that the eigenvalues of (A∗SAS − I)∗(A∗SAS − I) are bounded by ckt/n.

So far we have not used the structure of A, but now we must. Observe that (A∗SAS − I)∗(A∗SAS − I) is a block diagonal matrix with two diagonal blocks of the form X∗X and XX∗. Therefore the three matrices (A∗SAS − I)∗(A∗SAS − I), X∗X, and XX∗ have the same non-zero eigenvalues. Moreover, X is simply the matrix FS1 with those rows not indexed by S2 deleted. The hypotheses on F imply that the entries of X∗X satisfy |(X∗X)ij| ≤ ct/n. So the Gershgorin disc theorem implies that each eigenvalue λ of X∗X and (hence) of (A∗SAS − I)∗(A∗SAS − I) satisfies |λ| ≤ ckt/n.
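The bound (12) can also be checked empirically: for the unitary DFT we have |Fij|² = 1/n, i.e. c = 1, so sampling random (k, t)-sparse vectors should give ‖Ax‖2²/‖x‖2² within 1 ± √(kt/n). A quick numpy sketch with our own toy parameters:

```python
import numpy as np

n, k, t = 256, 8, 8
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT, |F_ij|^2 = 1/n (c = 1)
A = np.hstack([F, np.eye(n)])

rng = np.random.default_rng(0)
ratios = []
for _ in range(1000):
    x = np.zeros(2 * n, dtype=complex)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)        # k-sparse block
    x[n + rng.choice(n, t, replace=False)] = rng.standard_normal(t)    # t-sparse block
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

delta = np.sqrt(k * t / n)  # RIP constant of Theorem 7 with c = 1
print(max(abs(r - 1) for r in ratios), "<=", delta)
```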
A.2 ITERATIVE HARD THRESHOLDING
First we present Theorem 8 and then use it to prove Theorem 1.
Theorem 8. Let A ∈ Cn×2n be a matrix. Let 1 ≤ k, t ≤ n be positive integers and suppose δ3 is an M3k,3t-RIP constant for A and δ2 is an M2k,2t-RIP constant for A. Let x ∈ C2n, r ∈ Cn, y = Ax + r, and S ∈ Sk,t. Letting x[T+1] = IHT(y, A, k, t, T), if δ3 < 1/√3, then we have the approximation error bound

‖x[T+1] − xS‖2 ≤ ρ^(T+1)·‖x[0] − xS‖2 + τ‖AxS̄ + r‖2

where ρ := √3·δ3 < 1 and τ(1 − ρ) = √3·√(1 + δ2) ≤ 2.18. Thus, the first term on the right goes to 0 as T goes to ∞.
Theorem 8 is a modification of Theorem 6.18 of Foucart & Rauhut (2017). More specifically, Theorem 6.18 of Foucart & Rauhut (2017) considers M3k, M2k, and Sk in place of M3k,3t, M2k,2t, and Sk,t, and any dimension N in place of 2n. The proofs are very similar, so we omit the proof of Theorem 8. We will now prove a lemma that will be required for the proof of Theorem 1. For the proof of Lemma 9 and Theorem 1, we use the following convention: if A ∈ Cm×N is a matrix, we denote by (A)S the m×N matrix that is obtained by starting with A and zeroing out the columns indexed by S̄, so that only the columns indexed by S remain. Note that (A)S̄ = A − (A)S.

Lemma 9. Let F ∈ Cn×n be a unitary matrix with |Fij|² ≤ c/n and let S ⊆ [n] be an index set with |S| = t. Then for any k-sparse vector z ∈ Cn, we have:
‖(F∗)SFz‖2² ≤ (ktc/n)·‖z‖2²
Proof of Lemma 9. First note that (F∗)S ∈ Cn×n contains only t non-zero columns since |S| = t. Therefore, we have |((F∗)SF)ij| ≤ tc/n since |Fij|² ≤ c/n. Further, since the non-zero columns of (F∗)S are orthogonal to each other, we get ((F∗)S)∗(F∗)S = (I)S, where I ∈ Cn×n is the identity matrix. Using this, we have for any w ∈ Cn,

‖(F∗)SFw‖2² = 〈(F∗)SFw, (F∗)SFw〉 = 〈((F∗)SF)∗(F∗)SFw, w〉 = 〈(F∗)SFw, w〉 = |〈(F∗)SFw, w〉|
Now let V ⊆ [n] be any index set with cardinality k, that is |V| = k, and let z ∈ Cn be any vector supported on V. We then get

‖(F∗)SFz‖2² = |〈(F∗)SFz, z〉| = | ∑(i∈V) z̄i ∑(j∈V) ((F∗)SF)ij·zj | ≤ ∑(i∈V) |zi| ∑(j∈V) |((F∗)SF)ij|·|zj| ≤ ∑(i∈V) |zi| · (tc/n) · ∑(j∈V) |zj| = (tc/n)·‖z‖1² ≤ (ktc/n)·‖z‖2²

where we use the fact that z is k-sparse for the last inequality.
Now we provide the proof for Theorem 1.
Proof of Theorem 1. Theorem 7 implies that the statement of Theorem 8 holds with δ3 = √(c·3k·3t/n) and δ2 = √(c·2k·2t/n). Noting that y = A[x̂h(k), e]T + Fx̂t(k), where [x̂h(k), e]T ∈ Mk,t, set x[T+1] = IHT(y, A, k, t, T) and apply Theorem 8 with x = [x̂h(k), e]T, r = Fx̂t(k), and S = supp(x). Writing x[T+1] = [x̂[T+1], e[T+1]]T, use the facts that ‖x̂[T+1] − x̂h(k)‖2 ≤ ‖x[T+1] − xS‖2 and ‖Fx̂t(k)‖2 = ‖x̂t(k)‖2. That gives (3). Letting T = (log(1/ε) + log(√(‖x̂h(k)‖2² + ‖e‖2²))) / log(1/ρ) gives ρ^T·√(‖x̂h(k)‖2² + ‖e‖2²) ≤ ε, which can be substituted into (3) to get (4). Noting that ‖e[T] − e‖2 ≤ τ‖x̂t(k)‖2 + ε, we can use the same reasoning as in Bafna et al. (2018) to get:

‖x̂[T+1] − x̂h(k)‖∞ ≤ √(2ct/n)·(τ‖x̂t(k)‖2 + ε)   (15)

‖x̂[T+1] − x̂h(k)‖2 ≤ √(4ckt/n)·(τ‖x̂t(k)‖2 + ε)   (16)

which are essentially the same as the results of Theorem 2.2 in Bafna et al. (2018).
Now we prove (5). Write x[T] = (z[T])h(k,t), where z[T] = x[T−1] + A∗(y − Ax[T−1]). Further, write z[T] = [z1[T], z2[T]]T ∈ C2n, where z1[T], z2[T] ∈ Cn. Note that x̂[T] = (z1[T])h(k). Therefore, we have z1[T] = F∗(y − e[T−1]), where e[T−1] = (y − Fx̂[T−2])h(t). Now let S be the set of indices selected by the hard thresholding operation h(t) to get e[T−1]. Then observe that z1[T] = F∗(y − (y − Fx̂[T−2])S). Next, note that ‖z1[T] − x̂[T]‖2² ≤ ‖z1[T] − x̂h(k)‖2², as x̂[T] is a best k-sparse approximation to z1[T]. We can thus write

‖(z1[T] − x̂h(k)) − (x̂[T] − x̂h(k))‖2² = ‖z1[T] − x̂h(k)‖2² − 2Re〈z1[T] − x̂h(k), x̂[T] − x̂h(k)〉 + ‖x̂[T] − x̂h(k)‖2²

Therefore, we have

‖x̂[T] − x̂h(k)‖2² ≤ 2Re〈z1[T] − x̂h(k), x̂[T] − x̂h(k)〉 ≤ 2|〈z1[T] − x̂h(k), x̂[T] − x̂h(k)〉| ≤ 2‖z1[T] − x̂h(k)‖2·‖x̂[T] − x̂h(k)‖2

If ‖x̂[T] − x̂h(k)‖2 > 0, then ‖x̂[T] − x̂h(k)‖2 ≤ 2‖z1[T] − x̂h(k)‖2. Now note that

z1[T] = x̂ + F∗e − F∗(F(x̂ − x̂[T−2]) + e)S = x̂ + F∗e − (F∗)S(F(x̂ − x̂[T−2]) + e) = x̂ + (F∗ − (F∗)S)e − (F∗)SF(x̂ − x̂[T−2])

Using the fact that (F∗)S̄ = F∗ − (F∗)S, we can simplify the above to get

‖z1[T] − x̂h(k)‖2 = ‖(F∗)S̄Fx̂t(k) + (F∗)S̄e − (F∗)SF(x̂h(k) − x̂[T−2])‖2

Therefore,

‖x̂[T] − x̂h(k)‖2 ≤ 2(‖(F∗)S̄F‖2→2·‖x̂t(k)‖2 + ‖(F∗)S̄‖2→2·‖e‖2 + ‖(F∗)SF(x̂h(k) − x̂[T−2])‖2) ≤ 2(‖x̂t(k)‖2 + ‖e‖2) + 2‖(F∗)SF(x̂h(k) − x̂[T−2])‖2

where we use ‖(F∗)S̄‖2→2 ≤ ‖F∗‖2→2 = 1. Now since x̂h(k) − x̂[T−2] is 2k-sparse, we can use the result of Lemma 9 to get

‖x̂[T] − x̂h(k)‖2 ≤ 2(‖x̂t(k)‖2 + ‖e‖2) + 2√(2ktc/n)·‖x̂[T−2] − x̂h(k)‖2

Now let ρ = 2√2·√(ktc/n) and τ(1 − ρ) = 2, and note that if ρ < 1, we can use induction on T to get (5). Then for any 0 < ε < 1 and any T ≥ (log(1/ε) + log(‖x̂h(k)‖2)) / log(1/ρ), we have ρ^T·‖x̂h(k)‖2 ≤ ε, which gives us (6).
A.3 BASIS PURSUIT
Definition 10. The matrix A ∈ Cm×N satisfies the robust null space property with constants 0 < ρ < 1, τ > 0 and norm ‖·‖ if for every set S ⊆ [N] with card(S) ≤ s and for every v ∈ CN we have

‖vS‖1 ≤ ρ‖vS̄‖1 + τ‖Av‖

Definition 11. The matrix A ∈ Cm×N satisfies the `q robust null space property of order s with constants 0 < ρ < 1, τ > 0 and norm ‖·‖ if for every set S ⊆ [N] with card(S) ≤ s and for every v ∈ CN we have

‖vS‖q ≤ (ρ/s^(1−1/q))·‖vS̄‖1 + τ‖Av‖
Note that if q = 1 then this is simply the robust null space property.
The proof of Theorem 2 requires the following theorem (whose full proof is given in Foucart & Rauhut (2017)).

Theorem 12 (Theorem 4.33 in Foucart & Rauhut (2017)). Let a1, . . . , aN be the columns of A ∈ Cm×N, let x ∈ CN with its s largest absolute entries supported on S, and let y = Ax + e with ‖e‖2 ≤ η. For δ, β, γ, θ, τ ≥ 0 with δ < 1, assume that

‖A∗SAS − I‖2→2 ≤ δ,   max(l∈S̄) ‖A∗S·al‖2 ≤ β,

and that there exists a vector u = A∗h ∈ CN with h ∈ Cm such that ‖uS − sgn(xS)‖2 ≤ γ, ‖uS̄‖∞ ≤ θ, and ‖h‖2 ≤ τ√s.

If ρ := θ + βγ/(1 − δ) < 1, then a minimizer x# of ‖z‖1 subject to ‖Az − y‖2 ≤ η satisfies:

‖x# − x‖2 ≤ (2/(1 − ρ))·(1 + β/(1 − δ))·‖xS̄‖1 + ( (2(µγ + τ√s)/(1 − ρ))·(1 + β/(1 − δ)) + 2µ )·η

where µ := √((1 + δ)/(1 − δ)) and sgn(x)i = 0 if xi = 0, sgn(x)i = 1 if xi > 0, and sgn(x)i = −1 if xi < 0.
Lemma 13. Let A ∈ Cn×2n. If (1 − δ)‖x‖2² ≤ ‖Ax‖2² ≤ (1 + δ)‖x‖2² for all x ∈ Mk,t, then ‖A∗SAS − I‖2→2 ≤ δ for any S ∈ Sk,t.

Proof. Let S ∈ Sk,t be given. Then for any x ∈ CS, we have

|‖ASx‖2² − ‖x‖2²| ≤ δ‖x‖2²

We can re-write the left-hand side as ‖ASx‖2² − ‖x‖2² = 〈ASx, ASx〉 − 〈x, x〉 = 〈(A∗SAS − I)x, x〉. Noting that A∗SAS − I is Hermitian, we have:

‖A∗SAS − I‖2→2 = max(x∈CS\{0}) |〈(A∗SAS − I)x, x〉| / ‖x‖2² ≤ δ
Proof of Theorem 2. We will derive (7) by showing that the matrix A satisfies all the hypotheses in Theorem 12 for every vector in Mk,t.

First note that by Theorem 7, A satisfies the Mk,t-RIP property with constant δk,t := √(ckt/n). Therefore, by Lemma 13, for any S ∈ Sk,t, we have ‖A∗SAS − I‖2→2 ≤ δk,t. Since A∗SAS is a positive semi-definite matrix, it has only non-negative eigenvalues that lie in the range [1 − δk,t, 1 + δk,t]. Since δk,t < 1 by assumption, A∗SAS is injective. Thus, we can set h = AS(A∗SAS)^(−1)·sgn(xS) and get:

‖h‖2 = ‖AS(A∗SAS)^(−1)sgn(xS)‖2 ≤ ‖AS‖2→2·‖(A∗SAS)^(−1)‖2→2·‖sgn(xS)‖2 ≤ τ√(k + t)

where τ = √(1 + δk,t)/(1 − δk,t), and we have used the following facts: since ‖A∗SAS − I‖2→2 ≤ δk,t < 1, we get that ‖(A∗SAS)^(−1)‖2→2 ≤ 1/(1 − δk,t), and the largest singular value of AS is at most √(1 + δk,t).

Now let u = A∗h; then ‖uS − sgn(xS)‖2 = 0. Next we need to bound the value ‖uS̄‖∞. Denoting row j of A∗S̄AS by the vector vj, we see that it has at most max{k, t} non-zero entries and that |(vj)l|² ≤ c/n for l = 1, . . . , k + t. Therefore, for any element (uS̄)j, we have:

|(uS̄)j| = |〈(A∗SAS)^(−1)sgn(xS), (vj)∗〉| ≤ ‖(A∗SAS)^(−1)‖2→2·‖sgn(xS)‖2·‖vj‖2 ≤ (√(k + t)/(1 − δk,t))·√(max{k, t}·c/n)

Defining β := √(max{k, t}·c/n) and θ := (√(k + t)/(1 − δk,t))·β, we get ‖uS̄‖∞ ≤ θ < 1 and also observe that max(l∈S̄) ‖A∗S·al‖2 ≤ β. Therefore, all the hypotheses of Theorem 12 have been satisfied. Note that y = Fx̂ + e = A[x̂h(k), e]T + Fx̂t(k). Therefore, setting x# = BP(y, A, ‖x̂t(k)‖2), we use the fact ‖Fx̂t(k)‖2 = ‖x̂t(k)‖2 combined with the bound in Theorem 12 to get (7):

‖x̂# − x̂h(k)‖2 ≤ ( (2τ√(k + t)/(1 − θ))·(1 + β/(1 − δk,t)) + 2τ )·‖x̂t(k)‖2

where we write x# = [x̂#, e#]T with x̂#, e# ∈ Cn.
We now focus on proving Theorem 3. In order to do so, we will need some lemmas that will be used in the main proof.
Lemma 14. If a matrix A ∈ Cm×N satisfies the `2 robust null space property for S ⊂ [N] with card(S) = s, then it satisfies the `1 robust null space property for S with constants 0 < ρ < 1 and τ′ := τ√s > 0.

Proof. For any v ∈ CN, ‖vS‖2 ≤ (ρ/√s)‖vS̄‖1 + τ‖Av‖. Then, using the fact that ‖vS‖1 ≤ √s·‖vS‖2, we get ‖vS‖1 ≤ ρ‖vS̄‖1 + τ√s·‖Av‖.
Lemma 15 (Theorem 4.20 in Foucart & Rauhut (2017)). If a matrix A ∈ Cm×N satisfies the `1 robust null space property (with respect to ‖·‖) with constants 0 < ρ < 1 and τ > 0 for S ⊂ [N], then:

‖z − x‖1 ≤ ((1 + ρ)/(1 − ρ))·(‖z‖1 − ‖x‖1 + 2‖xS̄‖1) + (2τ/(1 − ρ))·‖A(z − x)‖

for all z, x ∈ CN.

Lemma 16 (Proposition 2.3 in Foucart & Rauhut (2017)). For any p > q > 0 and x ∈ Cn,

inf(z∈Mk) ‖x − z‖p ≤ (1/k^(1/q − 1/p))·‖x‖q
Proof of Theorem 3. Let 0 < ρ < 1 be arbitrary. Since F is a unitary matrix, for any S ⊆ [n] and v ∈ Cn, we have

‖vS‖2 ≤ (ρ/√k)·‖vS̄‖1 + τ‖v‖2 = (ρ/√k)·‖vS̄‖1 + τ‖Fv‖2   (17)

where τ = 1. Now let S ⊆ [n] be such that card(S) ≤ k. Then F satisfies the `2 robust null space property for S. Next, using Lemma 14, we get ‖vS‖1 ≤ ρ‖vS̄‖1 + τ√k·‖Fv‖2 for all v ∈ Cn. Now let x# = BP(y, F, η); then we know ‖x#‖1 ≤ ‖x̂‖1. Fixing S ⊆ [n] to be the support of x̂h(k) and using Lemma 15, we get:

‖x# − x̂‖1 ≤ ((1 + ρ)/(1 − ρ))·(‖x#‖1 − ‖x̂‖1 + 2‖x̂t(k)‖1) + (2τ√k/(1 − ρ))·‖F(x# − x̂)‖2
 ≤ ((1 + ρ)/(1 − ρ))·(2‖x̂t(k)‖1) + (2τ√k/(1 − ρ))·‖F(x# − x̂)‖2
 ≤ ((1 + ρ)/(1 − ρ))·(2‖x̂t(k)‖1) + (4τ√k/(1 − ρ))·‖e‖2
 ≤ ((1 + ρ)/(1 − ρ))·(2‖x̂t(k)‖1) + (4τ√k/(1 − ρ))·η
Letting ρ → 0 and recalling that τ = 1 gives (8). Now let S be the support of (x# − x̂)h(k). Note that ‖(x# − x̂)S̄‖2 = inf(z∈Mk) ‖(x# − x̂) − z‖2. Then, using Lemma 16 and (17), we see that

‖x# − x̂‖2 ≤ ‖(x# − x̂)S‖2 + ‖(x# − x̂)S̄‖2
 ≤ (1/√k)·‖x# − x̂‖1 + (ρ/√k)·‖(x# − x̂)S̄‖1 + τ‖F(x# − x̂)‖2
 ≤ ((1 + ρ)/√k)·‖x# − x̂‖1 + 2τη
 ≤ ((1 + ρ)²/(√k(1 − ρ)))·(2‖x̂t(k)‖1) + (4τ(1 + ρ)/(1 − ρ))·η + 2τη
 = ((1 + ρ)²/(√k(1 − ρ)))·(2‖x̂t(k)‖1) + (4τ(1 + ρ)/(1 − ρ) + 2τ)·η
Recalling τ = 1 and letting ρ→ 0 gives the desired result.
A.4 DANTZIG SELECTOR
Next we introduce the Dantzig Selector algorithm with an additional constraint. We first prove its recovery guarantees for `∞-norm and then explain the reasoning behind the additional constraint.
Proof of Theorem 4. The proof follows the same structure as the proof of Theorem 3. Therefore we provide a sketch and leave out the complete derivation. Let 0 < ρ < 1 be arbitrary. Since F is a unitary matrix, for any S ⊆ [n] and v ∈ Cn, we have
‖vS‖2 ≤ (ρ/√k)·‖vS̄‖1 + ‖vS‖2 ≤ (ρ/√k)·‖vS̄‖1 + √k·‖v‖∞ = (ρ/√k)·‖vS̄‖1 + √k·‖F∗Fv‖∞
The rest of the argument is the same as in the proof of Theorem 3. | 1. What is the focus and contribution of the paper on compressive recovery defense frameworks?
2. What are the strengths of the paper, particularly in its theoretical guarantees and experiment results?
3. Do you have any concerns regarding the presentation of the recovery algorithms and their comparison?
4. How does the reviewer assess the relevance and novelty of the paper's content regarding compressive sensing and defense against adversarial attacks?
5. What are the weaknesses of the paper, especially in its experiment section and comparisons with other works? | Review | Review
This paper extends the compressive recovery defense framework introduced by Bafna et al. (2018), which mainly targets l_0 attacks, to l_2 and l_∞ attacks. They provide guarantees for several recovery algorithms in the case of different kinds of norm-bounded noise. The difference between their work and the previous work is clearly explained.
Overall, this paper is a follow-up to Bafna et al. (2018), but with better theoretical guarantees and ample experimental results to support its robustness against various popular attacks. Given its contribution and its potential to inspire future work, I think this paper could be accepted to the 2020 ICLR conference.
In Section 3.2 Recovery Algorithms, the authors clearly state three algorithms, including IHT, BP, and DS, along with their modifications from the standard versions, but fail to compare the differences between these algorithms. The authors' motivation for proposing these different recovery algorithms is not clear, and whether their performance varies from one another also remains unknown. Some analysis of their advantages and disadvantages under varied attack conditions would be helpful.
In Section 3.4 Related Work, the authors mention many works aiming at defending against adversarial inputs. However, Bafna et al. (2018) is the only work here that has something to do with compressive sensing. The paper should perhaps include some related work on the theory of compressive sensing beyond Bafna et al. (2018), and on how it is combined with the defense against adversarial inputs. This would help readers better understand the novelty and breakthroughs in this respect.
For the experiments, it would be better to have comparisons between the proposed algorithms and related methods. Also, the proposed IHT and DS are modified versions; what difference do these modifications make in the experiments?
Minor comments:
- Page 2: in the line above equation (1), "meaning that x_t(k)…", this should presumably be x̂_t(k)
- Page 6: in the explanation of Figure 2, the adversarial inputs are in the second and fifth columns, not the fourth column
ICLR | Title
Temperature Schedules for self-supervised contrastive methods on long-tail data
Abstract
Most approaches for self-supervised learning (SSL) are optimised on curated balanced datasets, e.g. ImageNet, despite the fact that natural data usually exhibits long-tail distributions. In this paper, we analyse the behaviour of one of the most popular variants of SSL, i.e. contrastive methods, on long-tail data. In particular, we investigate the role of the temperature parameter τ in the contrastive loss, by analysing the loss through the lens of average distance maximisation, and find that a large τ emphasises group-wise discrimination, whereas a small τ leads to a higher degree of instance discrimination. While τ has thus far been treated exclusively as a constant hyperparameter, in this work, we propose to employ a dynamic τ and show that a simple cosine schedule can yield significant improvements in the learnt representations. Such a schedule results in a constant ‘task switching’ between an emphasis on instance discrimination and group-wise discrimination and thereby ensures that the model learns both group-wise features, as well as instance-specific details. Since frequent classes benefit from the former, while infrequent classes require the latter, we find this method to consistently improve separation between the classes in long-tail data without any additional computational cost.
1 INTRODUCTION
Deep Neural Networks have shown remarkable capabilities at learning representations of their inputs that are useful for a variety of tasks. Especially since the advent of recent self-supervised learning (SSL) techniques, rapid progress towards learning universally useful representations has been made.
Currently, however, SSL on images is mainly carried out on benchmark datasets that have been constructed and curated for supervised learning (e.g. ImageNet (Deng et al., 2009), CIFAR (Krizhevsky et al., 2009), etc.). Although the labels of curated datasets are not explicitly used in SSL, the structure of the data still follows the predefined set of classes. In particular, the class-balanced nature of curated datasets could result in a learning signal for unsupervised methods. As such, these methods are often not evaluated in the settings they were designed for, i.e. learning from truly unlabelled data. Moreover, some methods (e.g. (Asano et al., 2019; Caron et al., 2020)) even explicitly enforce a uniform prior over the embedding or label space, which cannot be expected to hold for uncurated datasets.
In particular, uncurated, real-world data tends to follow long-tail distributions (Reed, 2001). In this paper, we therefore analyse SSL methods on long-tailed data. Specifically, we analyse the behaviour of contrastive learning (CL) methods, which are among the most popular learning paradigms for SSL.
In CL, the models are trained such that embeddings of different samples are repelled, while embeddings of different ‘views’ (i.e. augmentations) of the same sample are attracted. The strength of those attractive and repelling forces between samples is controlled by a temperature parameter τ , which has been shown to play a crucial role in learning good representations (Chen et al., 2020c;a). To the best of our knowledge, τ has thus far almost exclusively been treated as a constant hyper-parameter.
In contrast, we employ a dynamic τ during training and show that this has a strong effect on the learned embedding space for long-tail distributions. In particular, by introducing a simple schedule for τ we consistently improve the representation quality across a wide range of settings. Crucially, these gains are obtained without additional costs and only require oscillating τ with a cosine schedule.
∗equal contribution. Code available at: github.com/annusha/temperature_schedules
This mechanism is grounded in our novel understanding of the effect of temperature on the contrastive loss. In particular, we analyse the contrastive loss from an average distance maximisation perspective, which gives intuitive insights as to why a large temperature emphasises group-wise discrimination, whereas a small temperature leads to a higher degree of instance discrimination and more uniform distributions over the embedding space. Varying τ during training ensures that the model learns both group-wise and instance-specific features, resulting in better separation between head and tail classes.
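As a rough preview of such a schedule, τ can be oscillated with a cosine over training steps; the period and the range [τ_min, τ_max] in the sketch below are illustrative placeholders, not values prescribed here:

```python
import math

def cosine_tau(step: int, period: int = 1000,
               tau_min: float = 0.1, tau_max: float = 1.0) -> float:
    """Oscillate the temperature between tau_min and tau_max with a cosine.

    The period and the [tau_min, tau_max] range are illustrative choices.
    """
    # Starts at tau_max, reaches tau_min mid-period, and returns to tau_max.
    return tau_min + 0.5 * (tau_max - tau_min) * (1 + math.cos(2 * math.pi * step / period))
```

Large τ phases then emphasise group-wise discrimination, while small τ phases emphasise instance discrimination, as analysed in Sec. 3.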
Overall, our contributions are summarised as follows:
• we carry out an extensive analysis of the effect of τ on imbalanced data;
• we analyse the contrastive loss from an average distance perspective to understand the emergence of semantic structure;
• we propose a simple yet effective temperature schedule that improves the performance across different settings;
• we show that the proposed τ scheduling is robust and consistently improves the performance for different hyperparameter choices.
2 RELATED WORK
Self-supervised representation learning (SSL) from visual data is a quickly evolving field. Recent methods are based on various forms of comparing embeddings between transformations of input images. We divide current methods into two categories: contrastive learning (He et al., 2020; Chen et al., 2020c;a; Oord et al., 2018), and non-contrastive learning (Grill et al., 2020; Zbontar et al., 2021; Chen & He, 2021; Bardes et al., 2022; Wei et al., 2022; Gidaris et al., 2021; Asano et al., 2019; Caron et al., 2020; He et al., 2022). Our analysis concerns the structure and the properties of the embedding space of contrastive methods when training on imbalanced data. Consequently, this section focuses on contrastive learning methods, their analysis and application to imbalanced training datasets.
Contrastive Learning employs instance discrimination (Wu et al., 2018) to learn representations by forming positive pairs of images through augmentations and a loss formulation that maximises their similarity while simultaneously minimising the similarity to other samples. Methods such as MoCo (He et al., 2020; Chen et al., 2020c), SimCLR (Chen et al., 2020a;b), SwAV (Caron et al., 2020), CPC (Oord et al., 2018), CMC Tian et al. (2020a), and Whitening (Ermolov et al., 2021) have shown impressive representation quality and down-stream performance using this learning paradigm. CL has also found applications beyond SSL pre-training, such as multi-modal learning (Shvetsova et al., 2022), domain generalisation (Yao et al., 2022), semantic segmentation (Van Gansbeke et al., 2021), 3D point cloud understanding (Afham et al., 2022), and 3D face generation (Deng et al., 2020).
Negatives. The importance of negatives for contrastive learning has been noted in many prior works (Wang et al., 2021; Yeh et al., 2021; Zhang et al., 2022; Iscen et al., 2018; Kalantidis et al., 2020; Robinson et al., 2020; Khaertdinov et al., 2022). Yeh et al. (2021) propose decoupled learning by removing the positive term from the denominator, Robinson et al. (2020) develop an unsupervised hard-negative sampling technique, Wang et al. (2021) propose to employ a triplet loss, and Zhang et al. (2022); Khaertdinov et al. (2022) propose to improve negative mining with the help of different temperatures for positive and negative samples, defined as input-independent or input-dependent functions, respectively. In contrast to explicitly choosing a specific subset of negatives, we discuss the Info-NCE loss (Oord et al., 2018) through the lens of an average distance perspective with respect to all negatives and show that the temperature parameter can be used to implicitly control the effective number of negatives.
Imbalanced Self-Supervised Learning. Learning on imbalanced data instead of curated balanced datasets is an important application, since natural data commonly follows long-tailed distributions (Reed, 2001; Liu et al., 2019; Wang et al., 2017). In recent work, Kang et al. (2020), Yang & Xu (2020), Liu et al. (2021), Zhong et al. (2022), and Gwilliam & Shrivastava (2022) discover that self-supervised learning generally allows learning a more robust embedding space than a supervised counterpart. Tian et al. (2021) explore the down-stream performance of contrastive learning on standard benchmarks based on large-scale uncurated pre-training and propose a multi-stage distillation framework to overcome the shift in the distribution of image classes. Jiang et al. (2021); Zhou et al. (2022) propose to address the data imbalance by identifying and then emphasising tail samples during training in an unsupervised manner. For this, Jiang et al. (2021) compare the outputs of the trained model before and after pruning, assuming that tail samples are more easily 'forgotten' by the pruned model and can thus be identified. Zhou et al. (2022) use the loss value for each input to identify tail samples and then use stronger augmentations for those. Instead of modifying the architecture
or the training data of the underlying frameworks, we show that a simple approach—i.e. oscillating the temperature of the Info-NCE loss (Oord et al., 2018) to alternate between instance and group discrimination—can achieve similar performance improvements at a low cost.
Analysis of Contrastive Learning (CL). Given the success of CL in representation learning, it is essential to understand its properties. While some work analyses the interpretability of embedding spaces (Bau et al., 2017; Fong & Vedaldi, 2018; Laina et al., 2020; 2021), here the focus lies on understanding the structure and learning dynamics of the objective function such as in Saunshi et al. (2019); Tsai et al. (2020); Chen et al. (2021). E.g., Chen et al. (2021) study the role of the projection head, the impact of multi-object images, and a feature suppression phenomenon. Wen & Li (2021) analyse the feature learning process to understand the role of augmentations in CL. Robinson et al. (2021) find that an emphasis on instance discrimination can improve representation of some features at the cost of suppressing otherwise well-learned features. Wang & Isola (2020); Wang & Liu (2021) analyse the uniformity of the representations learned with CL. In particular, Wang & Liu (2021) focus on the impact of individual negatives and describe a uniformity-tolerance dilemma when choosing the temperature parameter. In this work, we rely on the previous findings, expand them to long-tailed data distributions and complement them with an understanding of the emergence of semantic structure.
3 METHOD
In the following, we describe our approach and analysis of contrastive learning on long-tailed data. For this, we will first review the core principles of contrastive learning for the case of uniform data (Sec. 3.1). In Sec. 3.2, we then place a particular focus on the temperature parameter τ in the contrastive loss and its impact on the learnt representations. Based on our analysis, in Sec. 3.3 we discuss how the choice of τ might negatively affect the learnt representation of rare classes in the case of long-tailed distributions. Following this, we describe a simple proof-of-concept based on additional coarse supervision to test our hypothesis. We then further develop temperature schedules (TS) that yield significant gains with respect to the separability of the learnt representations in Sec. 4.
3.1 CONTRASTIVE LEARNING
The Info-NCE loss is a popular objective for contrastive learning (CL) and has led to impressive results for learning useful representations from unlabelled data (Oord et al., 2018; Wu et al., 2018; He et al., 2020; Chen et al., 2020a). Given a set of inputs {x1, . . . , xN} and the cosine similarities sij between learnt representations ui = f(A(xi)) and vj = g(A(xj)) of the inputs, the loss is defined by:
Lc = ∑_{i=1..N} −log( exp(sii/τ) / (exp(sii/τ) + ∑_{j≠i} exp(sij/τ)) ).   (1)
Here, A(·) applies a random augmentation to its input and f and g are deep neural networks. For a given xi, we will refer to ui as the anchor and to vj as a positive sample if i=j and as a negative if i ̸=j. Last, τ denotes the temperature of the Info-NCE loss and has been found to crucially impact the learnt representations of the model (Wang & Isola, 2020; Wang & Liu, 2021; Robinson et al., 2021).
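For concreteness, a minimal PyTorch sketch of Eq. (1) is given below; the batching convention (anchor ui paired with its positive vi on the diagonal) follows the notation above, while the tensor shapes are our own assumption:

```python
import torch
import torch.nn.functional as F

def info_nce(u: torch.Tensor, v: torch.Tensor, tau: float) -> torch.Tensor:
    """Info-NCE loss of Eq. (1) for embeddings u, v of shape (N, D).

    u[i] is the anchor for input x_i; v[i] is its positive, v[j] (j != i) its negatives.
    """
    u = F.normalize(u, dim=1)          # cosine similarity = dot product of unit vectors
    v = F.normalize(v, dim=1)
    sim = u @ v.t() / tau              # s_ij / tau, shape (N, N)
    targets = torch.arange(u.size(0), device=u.device)
    # Cross-entropy with the diagonal as the positive class realises Eq. (1).
    return F.cross_entropy(sim, targets)
```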
Uniformity. Specifically, a small τ has been tied to more uniformly distributed representations, see Fig. 1. For example, Wang & Liu (2021) show that the loss is ‘hardness-aware’, i.e. negative samples closest to the anchor receive the highest gradient. In particular, for a given anchor, the gradient with respect to the negative sample vj is scaled by its relative contribution to the denominator in Eq. (1):
∂Lc/∂vj = (∂Lc/∂sij) × (∂sij/∂vj) = (1/τ) × [softmaxk(sik/τ)]j × (∂sij/∂vj).   (2)
As a result, for sufficiently small τ, the model minimises the cosine similarity to the nearest negatives in the embedding space, as the softmax approaches an indicator function that assigns essentially all of the gradient to the hardest negative. The optimum of this objective, in turn, is to distribute the embeddings as uniformly as possible over the sphere, as this reduces the average similarity between nearest neighbours; see also Figs. 1 and 3.
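To make the 'hardness-awareness' of Eq. (2) tangible, the toy computation below (with made-up similarity values) prints the softmax weights that scale the gradients of four negatives at different temperatures:

```python
import torch

# Hypothetical cosine similarities of one anchor to four negatives.
sims = torch.tensor([0.9, 0.5, 0.1, -0.3])
for tau in (0.07, 0.5, 1.0):
    weights = torch.softmax(sims / tau, dim=0)  # gradient scaling of Eq. (2)
    print(f"tau={tau}: {weights.tolist()}")
# Small tau -> weights ~ one-hot on the hardest negative (sim 0.9);
# large tau -> weights much more evenly spread over all negatives.
```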
Semantic structure. In contrast, a large τ has been observed to induce more semantic structure in the representation space. However, while the effect of small τ has an intuitive explanation, the phenomenon that a larger τ induces semantic structure is much more poorly understood and has mostly been described empirically (Wang & Liu, 2021; Robinson et al., 2021). Specifically, note that for any given anchor and positive pair, all negatives are repelled from the anchor, with close-by samples receiving exponentially higher gradients. Nonetheless, for large τ, tightly packed semantic clusters emerge. However, if close-by negatives are heavily repelled, how can this be? Should the loss not be dominated by the hard-negative samples and thus break the semantic structure?
To better understand both phenomena, we propose to view the contrastive loss through the lens of average distance maximisation, which we describe in the following section.
3.2 CONTRASTIVE LEARNING AS AVERAGE DISTANCE MAXIMISATION
As discussed in the previous section, the parameter τ plays a crucial role in shaping the learning dynamics of contrastive learning. To better understand this role, in this section we present a novel viewpoint on the mechanics of the contrastive loss that explains the observed model behaviour. In particular, and in contrast to Wang & Liu (2021), who focused on the impact of individual negatives, we discuss the cumulative impact that all negative samples have on the loss.
To do so, we express the summands $\mathcal{L}_c^i$ of the loss in terms of distances $d_{ij}$ instead of similarities $s_{ij}$:
$$0 \;\le\; d_{ij} \;=\; \frac{1 - s_{ij}}{\tau} \;\le\; \frac{2}{\tau} \qquad \text{and} \qquad c_{ii} \;=\; \exp\left(d_{ii}\right). \qquad (3)$$
This allows us to rewrite the loss $\mathcal{L}_c^i$ as
$$\mathcal{L}_c^i \;=\; -\log \left( \frac{\exp\left(-d_{ii}\right)}{\exp\left(-d_{ii}\right) + \sum_{j \neq i} \exp\left(-d_{ij}\right)} \right) \;=\; \log \left( 1 + c_{ii} \sum_{j \neq i} \exp\left(-d_{ij}\right) \right). \qquad (4)$$
As the effect $c_{ii}$ of the positive sample for a given anchor is the same for all negatives, in the following we place a particular focus on the negatives and their relative influence on the loss in Eq. (4); for a discussion of the influence of positive samples, please see appendix A.4.
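The equivalence of the two forms in Eq. (4) is easily checked numerically; below is a small sanity check (ours) on random similarities:

    import numpy as np

    rng = np.random.default_rng(0)
    tau, N, i = 0.5, 8, 0
    s = rng.uniform(-1, 1, (N, N))                      # mock cosine similarities s_ij
    d = (1 - s[i]) / tau                                # distances d_ij of Eq. (3)
    S_i = np.exp(-np.delete(d, i)).sum()                # S_i = sum over j != i of exp(-d_ij)
    c_ii = np.exp(d[i])
    lhs = -np.log(np.exp(s[i, i] / tau) / np.exp(s[i] / tau).sum())  # summand of Eq. (1)
    assert np.isclose(lhs, np.log(1 + c_ii * S_i))      # right-hand side of Eq. (4)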
To understand the impact of the temperature $\tau$, first note that the loss monotonically increases with the sum $S_i = \sum_{j \neq i} \exp(-d_{ij})$ of exponential distances in Eq. (4). As $\log$ is a continuous, monotonic function, we base the following discussion on the impact of $\tau$ on the sum $S_i$.
For small τ , the nearest neighbours of the anchor point dominate Si, as differences in similarity are amplified. As a result, the contrastive objective maximises the average distance to nearest neighbours, leading to a uniform distribution over the hypersphere, see Fig. 3. Since individual negatives dominate the loss, this argument is consistent with existing interpretations, e.g. Wang & Liu (2021), as described in the previous section.
For large τ (e.g. τ ≥ 1), on the other hand, the contributions to the loss from a given negative are on the same order of magnitude for a wide range of cosine similarities. Hence, the contrastive objective can be thought of as maximising the average distance over a wider range of neighbours. Interestingly, since distant negatives will typically outnumber close negatives, the strongest cumulative contribution to the contrastive loss will come from more distant samples, despite the fact that individually the strongest contributions will come from the closest samples. To visualise this, in Fig. 2a, we plot the contributions of individual samples depending on their distance, as well as the distribution of similarities sij to negatives over the entire dataset in Fig. 2b. Since the number of negatives at larger distances (e.g. sij ≈ 0.1) significantly outnumbers close negatives (sij > 0.9), the peak of the cumulative contributions¹ shifts towards lower similarities for larger τ, as can be seen in Fig. 2c; in fact, for τ→∞, the distribution of cumulative contributions approaches the distribution of negatives. Hence, the model can significantly decrease the loss by increasing the distance to relatively ‘easy negatives’ for much longer during training, i.e. to samples that are easily distinguishable from the anchor by simple patterns. Instead of learning ‘hard’ features that allow for better instance discrimination between hard negatives, the model will be biased to learn easy patterns that allow for group-wise discrimination and thereby increase the margin between clusters of samples. Note that since the clusters as a whole mutually repel each other, the model is optimised to find a trade-off between the expanding forces between hard negatives (i.e. within a cluster) and the compressing forces that arise due to the margin maximisation between easy negatives (i.e. between clusters).
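This shift of the cumulative contributions can be reproduced in a few lines; the sketch below (ours) follows the binning procedure of footnote 1 on synthetic similarities, drawn such that most negatives are far from the anchor:

    import numpy as np

    rng = np.random.default_rng(0)
    s_neg = np.clip(rng.normal(0.1, 0.25, 10_000), -1, 1)  # synthetic similarities to negatives

    for tau in (0.1, 1.0):
        contrib = np.exp(-(1 - s_neg) / tau)     # per-negative term exp(-d_ij)
        bins = np.linspace(-1, 1, 101)           # 100 non-overlapping bins of size 0.02
        cum, _ = np.histogram(s_neg, bins=bins, weights=contrib)
        print(f"tau={tau}: cumulative contribution peaks near s = {bins[np.argmax(cum)]:.2f}")
    # For small tau the peak lies close to the hardest negatives; for large tau it
    # shifts towards the bulk of easier negatives, mirroring Fig. 2c.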
Importantly, such a bias towards easy features can prevent the models from learning hard features— i.e. by focusing on group-wise discrimination, the model becomes agnostic to instance-specific features that would allow for a better instance discrimination (cf. Robinson et al. (2021)). In the following, we discuss how this might negatively impact rare classes in long-tailed distributions.
3.3 TEMPERATURE SCHEDULES FOR CONTRASTIVE LEARNING ON LONG-TAIL DATA
As discussed in Sec. 1, naturally occurring data typically exhibit long-tail distributions, with some classes occurring much more frequently than others: head classes appear frequently across the dataset, whereas tail classes contain the fewest samples. Since self-supervised learning methods are designed to learn representations from unlabelled data, it is important to investigate their performance on imbalanced datasets.
Claim: Tail classes benefit from instance discrimination. As discussed in Sec. 3.2, a sufficiently large τ is required for semantic groups to emerge during contrastive learning, as it emphasises group-wise discrimination. However, as shown by Robinson et al. (2021), this can come at the cost of encoding instance-specific features and thus hurt the models’ instance discrimination capabilities.
We hypothesise that this disproportionately affects tail classes, as tail classes consist of only relatively few instances to begin with. Their representations should thus remain distinguishable from most of their neighbours and not be grouped with other instances, which are likely of a different class. In contrast, since head classes are represented by many samples, grouping those will be advantageous.
To test this hypothesis, we propose to explicitly train head and tail classes with different τ , to emphasise group discrimination for the former while ensuring instance discrimination for the latter.
¹ To obtain the cumulative contributions, we group the negatives into 100 non-overlapping bins of size 0.02 depending on their distance to the anchor and report the sum of contributions of a given bin.
[Figure 3 panel annotations: kNN@1 accuracies (all classes, head | tail) per panel: 57.93 (97.04 | 2.41); 61.73 (98.22 | 10.84); 50.72 (95.62 | 4.82); 76.36 (97.22 | 77.10); 49.71 (89.93 | 20.06); 49.24 (93.20 | 10.03); 45.76 (92.71 | 13.01); 70.91 (93.68 | 68.27). The kNN@1 values correspond to Fig. 5.]
Figure 3: Representations of a head and a tail class. Visualisation of the influence of τ on the representations of two semantically close classes (trained with all 10 classes). Red: a single head class; blue: a single tail class from CIFAR10-LT. A small τ=0.1 promotes uniformity, while a large τ=1.0 creates dense clusters. With τ{head/tail} we refer to the coarse supervision described in Sec. 3.3, which separates tail from head classes. In black / red / blue, we respectively show the average kNN accuracy over all classes / the head class / the tail class.
Experiment: Controlling τ with coarse supervision. We experiment on CIFAR10-LT (a long-tail variant of CIFAR10 - see Sec. 4.1) in which we select a different τ depending on whether the anchor ui is from a head or a tail class, i.e. of the 5 most or least common classes. We chose a relatively large τ (τhead=1.0) for the 5 head classes to emphasise group-wise discrimination and a relatively small τ (τtail=0.1) for the 5 tail classes to encourage the model to learn instance-discriminating features.
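In implementation terms, this experiment only requires a per-anchor temperature; a minimal sketch (ours; the head-class indices below are purely illustrative) is:

    import numpy as np

    TAU_HEAD, TAU_TAIL = 1.0, 0.1

    def coarse_tau(labels, head_classes):
        # Large tau for head-class anchors (group-wise discrimination),
        # small tau for tail-class anchors (instance discrimination).
        is_head = np.isin(labels, list(head_classes))
        return np.where(is_head, TAU_HEAD, TAU_TAIL)

    taus = coarse_tau(np.array([0, 7, 3, 9]), head_classes={0, 1, 2, 3, 4})
    # row i of the Info-NCE logits is then divided by taus[i] instead of a global tau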
As can be seen in Fig. 3, this simple manipulation of the contrastive loss indeed provides a significant benefit with respect to the semantic structure of the embedding space, despite only weakly supervising the learning by adjusting τ according to a coarse (frequent/infrequent) measure of class frequency.
In particular, in Fig. 3, we show the projections of a single head class and a single tail class onto the three leading PCA dimensions and the corresponding kNN accuracies. We would like to highlight the following results. First, without any supervision, we indeed find that the head class consistently performs better for larger values of τ (e.g. 1.0), whereas the tail class consistently benefits from smaller values for τ (e.g. 0.1). Second, when training the model according to the coarse τ supervision as described above, we are not only able to maintain the benefits of large τ values for the head class, but significantly outperform all constant τ versions for the tail class, which improves the overall model performance on all classes; detailed results for all classes are provided in the appendix.
Temperature Schedules (TS) without supervision. Such supervision with respect to the class frequency is, of course, generally not available when training on unlabelled data; the above experiments are only designed to test our claim and to provide an intuition about the learning dynamics on long-tail data. We emphasise that this supervision is very coarse, separating the unlabelled data merely into frequent and infrequent classes. Nonetheless, while the results are encouraging, they still rely on these additional, albeit coarse, labels. In what follows, we therefore present an unsupervised method that yields similar benefits.
In detail, we propose to modify τ according to a cosine schedule, such that it alternates between an upper (τ+) and a lower (τ−) bound at a fixed period length T :
$$\tau_{\cos}(t) \;=\; \left(\tau_+ - \tau_-\right) \times \left(1 + \cos\left(2\pi t/T\right)\right)/2 \;+\; \tau_- \,; \qquad (5)$$
here, t denotes the training epoch. This method is motivated by the observation that τ controls the trade-off between learning easily separable features and learning instance-specific features.
Arguably, however, the models should learn both types of features: i.e. the representation space should be structured according to easily separable features that (optimally) represent semantically meaningful group-wise patterns, whilst still allowing for instance discrimination within those groups.
Therefore, we propose to alternate between both objectives as in Eq. (5), to ensure that throughout training the model learns to encode instance-specific patterns, whilst also structuring the representation space along semantically meaningful features. Note that while we find a cosine schedule to work best and to be robust with respect to the choice for T (Sec. 4.3), we also evaluate alternatives. Even randomly sampling τ from the interval [τ−, τ+] improves the model performance. This indicates that the task switching between group-wise discrimination (large τ ) and instance discrimination (small τ ) is indeed the driving factor behind the performance improvements we observe.
4 EXPERIMENTAL RESULTS
In this section, we validate our hypothesis that simple manipulations of the temperature parameter in Eq. (1) lead to better performance for long-tailed data. First, we introduce our experimental setup in Sec. 4.1, then in Sec. 4.2 we discuss the results across three imbalanced datasets and, finally, we analyse different design choices of the framework through extensive ablation studies in Sec. 4.3.
4.1 IMPLEMENTATION DETAILS
Datasets. We consider long-tailed (LT) versions of the following three popular datasets for the experiments: CIFAR10-LT, CIFAR100-LT, and ImageNet100-LT. For most of the experiments, we follow the setting from SDCLR (Jiang et al., 2021). In the case of CIFAR10-LT/CIFAR100-LT, the original datasets (Krizhevsky et al., 2009) consist of 60000 32x32 images sampled uniformly from 10 and 100 semantic classes, respectively, where 50000 images correspond to the training set and 10000 to the test set. Long-tail versions of the datasets were introduced by Cui et al. (2019) and consist of a subset of the original datasets with an exponential decay in the number of images per class. The imbalance ratio controls the uniformity of the dataset and is calculated as the ratio of the sizes of the biggest and the smallest classes. By default, we use an imbalance ratio of 100 if not stated otherwise. Experiments in Tab. 1 and Tab. 3 are averaged over three runs with different permutations of classes. ImageNet100-LT is a subset of the original ImageNet-100 (Tian et al., 2020a) consisting of 100 classes for a total of 12.21k 256x256 images. The number of images per class varies from 1280 to 25.
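For reference, the exponential decay of the class sizes can be generated as follows (a sketch under our reading of Cui et al. (2019); n_max denotes the size of the largest class):

    def long_tail_sizes(n_max=5000, n_classes=10, imbalance_ratio=100):
        # Largest class keeps n_max samples; the smallest keeps n_max / imbalance_ratio.
        mu = (1 / imbalance_ratio) ** (1 / (n_classes - 1))
        return [round(n_max * mu**c) for c in range(n_classes)]

    print(long_tail_sizes())  # class sizes decay exponentially from 5000 down to 50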
Training. We use an SGD optimizer for all experiments with a weight decay of 1e-4. For the learning rate, we utilize a linear warm-up for 10 epochs, followed by a cosine annealing schedule starting from 0.5. We train for 2000 epochs for CIFAR10-LT and CIFAR100-LT and 800 epochs for ImageNet100-LT. For CIFAR10-LT and CIFAR100-LT we use a ResNet18 (He et al., 2016) backbone; for ImageNet100-LT we use a ResNet50 (He et al., 2016) backbone. For both the MoCo (He et al., 2020) and the SimCLR (Chen et al., 2020a) experiments, we follow Jiang et al. (2021) and use the following augmentations: resized crop, color jitter, grayscale, and horizontal flip. MoCo details: we use a dictionary of size 10000, a projection dimensionality of 128, and a projection head with one linear layer. SimCLR details: we train with a batch size of 512 and a projection head that has two layers with an output size of 128. For evaluation, we discard the projection head and apply l2-normalisation. Regarding the proposed temperature schedules (TS), we use a period length of T=400 with τ+=1.0 and τ−=0.1 if not stated otherwise; for more details, see appendix A.2.
Evaluation. We use k nearest neighbours (kNN) and linear classifiers to assess the learned features. For kNN, we compute l2-normalised distances between LT samples from the train set and the class-balanced test set. Each test image is assigned to the majority class among the top-k closest train images. We report accuracy for kNN with k=1 (kNN@1) and with k=10 (kNN@10). Compared to fine-tuning or linear probing, kNN directly evaluates the learned embedding, since it relies on the learned metric and the local structure of the space. We also evaluate the linear separability and generalisation of the space with a linear classifier that we train on top of the frozen backbone. For this, we consider two setups: balanced few-shot linear probing (FS LP) and long-tailed linear probing (LT LP). For FS LP, the few-shot train set is a direct subset of the original long-tailed train set with the shot number equal to the minimum class size in the original LT train set. For LT LP, we use the original LT training set. For extended tables, see appendix A.3.
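For completeness, the kNN protocol amounts to the following sketch (ours), assuming precomputed embeddings and labels:

    import numpy as np
    from collections import Counter

    def knn_accuracy(train_z, train_y, test_z, test_y, k=10):
        # l2-normalise, then classify each test sample by the majority label
        # among its k most similar training samples (cosine similarity).
        train_z = train_z / np.linalg.norm(train_z, axis=1, keepdims=True)
        test_z = test_z / np.linalg.norm(test_z, axis=1, keepdims=True)
        topk = np.argsort(-(test_z @ train_z.T), axis=1)[:, :k]
        preds = [Counter(train_y[idx].tolist()).most_common(1)[0][0] for idx in topk]
        return np.mean(np.array(preds) == test_y)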
4.2 EFFECTIVENESS OF TEMPERATURE SCHEDULES
Contrastive learning with TS. In Tab. 1 we present the efficacy of temperature schedules (TS) for two well-known contrastive learning frameworks MoCo (He et al., 2020) and SimCLR (Chen et al., 2020a). We find that both frameworks benefit from varying the temperature and we observe consistent improvements over all evaluation metrics for CIFAR10-LT and CIFAR100-LT, i.e. the local structure of the embedding space (kNN) and the global structure (linear probe) are both improved. Moreover, we show in Tab. 3 that our finding also transfers to ImageNet100-LT. Furthermore, in Tab. 2 we evaluate the performance of the proposed method on the CIFAR10 and CIFAR100 datasets with different imbalance ratios. An imbalance ratio of 50 (imb50) reflects less pronounced imbalance, and imb150 corresponds to the datasets with only 30 (CIFAR10) and 3 (CIFAR100) samples for the
smallest class. Varying τ during training improves the performance for different long-tailed data; for a discussion on the dependence of the improvement on the imbalance ratio, please see the appendix.
TS vs SDCLR. Further, we compare our method with SDCLR (Jiang et al., 2021). In SDCLR, SimCLR is modified such that the embeddings of the online model are contrasted with those of a pruned version of the same model, which is updated after every epoch. Since the pruning is done by simply masking the pruned weights of the original model, SDCLR requires twice as much memory as the original SimCLR, plus extra computational time to prune the model every epoch. In contrast, our method does not require any changes to the architecture or training. In Tab. 3 we show that this simple approach improves not only over the original SimCLR, but also over SDCLR on most metrics.
4.3 ABLATIONS
In this section, we evaluate how the hyperparameters in Eq. (5) can influence the model behaviour.
Cosine Boundaries. First, we vary the lower τ− and upper τ+ bounds of τ for the cosine schedule. In Tab. 4 we assess the performance of MoCo+TS with different τ− and τ+ on CIFAR10 with FS LP. We observe a clear trend that with a wider range of τ values the performance increases. We attribute this to the ability of the model to learn better ‘hard’ features with low τ and improve semantic structure for high τ . Note that 0.07 is the value for τ in many current contrastive learning methods.
Cosine Period. Further, we investigate whether the length of the period T in Eq. (5) impacts the performance of the model. In Tab. 6, we show that modifying the temperature τ based on the cosine schedule is beneficial during training independently of the period T . The performance varies insignificantly with T and consistently improves over the standard fixed τ=0.2, with the best performance achieved at T=400. Even though the performance is stable with respect to the length of the period, it changes within one period, as we show in Fig. 4. Here, we average the accuracy over the last full period across models trained with different T and find that the models reach the best performance around 0.7T . Based on this observation, we recommend stopping training after (n− 0.3)T epochs, where n is the number of full periods.
Alternatives to Cosine Schedule. Additionally, we test different methods of varying the temperature parameter τ and report the results in Tab. 5: we examine a linearly oscillating (oscil) function, a step function, and random sampling. For the linear oscillations, we follow the same schedule as for the cosine version, as shown on the right of Tab. 5. For the step function, we change τ from a low (0.1) to a high (0.5) value and back every 200 epochs. For random, we uniformly sample values for τ from the range [0.1, 0.5]. In Tab. 5 we observe that these alternative methods for varying τ also improve the performance over a fixed temperature, while the cosine schedule achieves the best performance. These results indicate that it is indeed the task switching between group-wise and instance-wise discrimination during training which is the driving factor behind the observed improvements for unsupervised long-tail representation learning. We assume that the reason slow oscillation of the temperature performs better than fast (i.e. random) temperature changes is grounded in the learning dynamics, i.e. the slow evolution of the embedding space during training.
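For clarity, the schedules compared in Tab. 5 can be summarised as follows (a sketch based on the description above; exact phases in the original implementation may differ):

    import numpy as np

    def tau_cosine(ep, T=400, lo=0.1, hi=1.0):        # Eq. (5)
        return (hi - lo) * (1 + np.cos(2 * np.pi * ep / T)) / 2 + lo

    def tau_linear_oscil(ep, T=400, lo=0.1, hi=1.0):  # triangle wave, same period/bounds
        phase = (ep % T) / T
        return hi - (hi - lo) * (2 * phase if phase < 0.5 else 2 * (1 - phase))

    def tau_step(ep, lo=0.1, hi=0.5, every=200):      # low/high switch every 200 epochs
        return lo if (ep // every) % 2 == 0 else hi

    def tau_random(rng, lo=0.1, hi=0.5):              # uniform sampling, e.g. once per epoch
        return rng.uniform(lo, hi)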
5 CONCLUSION
In this work, we discover the surprising effectiveness of temperature schedules for self-supervised contrastive representation learning on imbalanced datasets. In particular, we find that a simple cosine schedule for τ consistently improves two state-of-the-art contrastive methods over several datasets and different imbalance ratios, without introducing any additional cost.
Importantly, our approach is based on a novel perspective on the contrastive loss, in which the average distance maximisation aspect is emphasised. This perspective sheds light on which samples dominate the contrastive loss and explains why large values of τ can lead to the emergence of tight clusters in the embedding space, despite the fact that individual instances always repel each other.
Specifically, we find that while a large τ is thus necessary to induce semantic structure, the concomitant focus on group-wise discrimination biases the model to encode easily separable features rather than instance-specific details. However, in long-tailed distributions, this can be particularly harmful to the most infrequent classes, as those require a higher degree of instance discrimination to remain distinguishable from the prevalent semantic categories. The proposed cosine schedule for τ overcomes this tension, by alternating between an emphasis on instance discrimination (small τ ) and group-wise discrimination (large τ ). As a result of this constant ‘task switching’, the model is trained to both structure the embedding space according to semantically meaningful features, whilst also encoding instance-specific details such that rare classes remain distinguishable from dominant ones.
ETHICS STATEMENT
The paper proposes an analysis and a method to improve the performance of self-supervised representation learning methods based on the contrastive loss. The method and investigation in this paper do not introduce any ethical issues to the field of representation learning, as it is decoupled from the training data. Nonetheless, we would like to point out that representation learning does not automatically prevent models from learning harmful biases from the training data and should not be used outside of research applications without thorough evaluation for fairness and bias.
ACKNOWLEDGEMENTS
C. R. is supported by VisualAI EP/T028572/1 and ERC-UNION-CoG-101001212.
A APPENDIX
A.1 PSEUDO-CODE FOR REPRODUCIBILITY OF COSINE SCHEDULE
Algorithm 1 Cosine Schedule. A runnable Python version of Eq. (5) (the function name is ours; defaults follow Sec. 4.1):

    import numpy as np

    def cosine_tau(epoch, T=400, tau_min=0.1, tau_max=1.0):
        # Eq. (5): oscillate tau between tau_max and tau_min with period T (in epochs).
        return (tau_max - tau_min) * (1 + np.cos(2 * np.pi * epoch / T)) / 2 + tau_min
Insert algorithm 1 into your favourite contrastive learning framework to check it out!
A.2 IMPLEMENTATION DETAILS
Evaluation details. Following Jiang et al. (2021), we separate 5000 images for CIFAR10/100-LT as a validation set for each split. As we discussed in the main paper, the performance of the model depends on the relative position within a period T . Therefore we utilise the validation split to choose a checkpoint for further testing on the standard test splits for CIFAR10/100-LT. Precisely, for each dataset, we select the evaluation epoch for the checkpoint based only on the validation set of the first random split; the other splits of the same dataset are evaluated using the same number of epochs. Note that for ImageNet100-LT there is no validation split and we select the last checkpoint as in Jiang et al. (2021). For a fair comparison, we also reproduce the numbers from Jiang et al. (2021) in the same way.
Division into head, mid, and tail classes. Following Jiang et al. (2021), we divide all the classes into three categories: head classes are with the most number of samples, tail classes are with the least number of samples and mid are the rest. In particular, for CIFAR10-LT for each split there are 4 head classes, 3 mid classes, and 3 tail classes; for CIFAR100-LT there are 34 head classes, 33 mid classes, 33 tail classes; for ImageNet100-LT head classes are classes with more than 100 instances, tail classes have less than 20 instances per class, and mid are the rest.
A.3 EXTENDED RESULTS
Extension of Fig. 3. In Fig. 5, we provide the full kNN accuracy results on CIFAR10 when the model is trained with different fixed τ values and with coarse binary supervision. Tail classes in particular are improved by instance discrimination (small τtail).
Head-mid-tail classes evaluation. In the following, we present a detailed comparison of SimCLR and SimCLR+TS on head, mid, and tail classes on CIFAR10-LT in Tab. 7, on CIFAR100-LT in Tab. 8 and on ImageNet100-LT in Tab. 9. We observe consistent improvement for all evaluation metrics for all types of classes over the three datasets.
Influence of TS on Uniform vs Long-Tailed Distributions. To further corroborate that TS is particularly helpful for imbalanced data, we also apply TS to uniformly distributed data. In Tab. 10, we observe that the cosine schedule yields significant and consistent gains for the long-tailed version of CIFAR10 (CIFAR10-LT), but not for the uniform one (CIFAR10-Uniform). We assume that for long-tail distributions both head and tail classes benefit from a better separation between the two: on the one hand, the tail classes form better clusters and are thus easier to classify based on their neighbours; on the other hand, the clusters of the head classes are ’purified’, which should similarly improve performance. For the uniform distribution, in contrast, we do not observe such an influence of TS and the performance changes only marginally.
A.4 INFLUENCE OF THE POSITIVE SAMPLES ON CONTRASTIVE LEARNING
In Sec. 3.2, we particularly focused on the impact of the negative samples on the learning dynamics under the contrastive objective, as they likely are the driving factor with respect to the semantic structure. In fact, we find that the positive samples should have an inverse relation with the temperature τ and thus cannot explain the observed learning dynamics, as we discuss in the following.
To understand the impact of the positive samples, first note their role in the loss (same as Eq. (4)):
$$\mathcal{L}_c^i \;=\; \log\left(1 + c_{ii} S_i\right). \qquad (6)$$
In particular, $c_{ii}$ scales the entire sum $S_i = \sum_{j \neq i} \exp(-d_{ij})$. As such, encoding two augmentations of the same instance at a large distance is much more ‘costly’ for the model than encoding two different samples close to each other, as every summand of $S_i$ is amplified by the corresponding $c_{ii}$. As a result, the model will be biased to ‘err on the safe side’ and become invariant to the augmentations, which has been one of the main motivations for introducing augmentations in contrastive learning in the first place, cf. Tian et al. (2020b); Chen et al. (2020a); Caron et al. (2020).
Consequently, the positive samples, of course, also influence the forming of clusters in the embedding space as they induce invariance with respect to augmentations. Note, however, that this does not contradict our analysis regarding the impact of negative samples, but rather corroborates it.
In particular, cii biases the model to become invariant to the applied augmentations for all values of τ ; in fact, for small τ , this invariance is even emphasised as cii increases for small τ and the influence of the negatives is diminished. Hence, if the augmentations were the main factor in inducing semantic structure in the embedding space, τ should have the opposite effect of the one we and many others (Wang & Liu, 2021; Zhang et al., 2022; 2021) observe.
Thus, instead of inducing semantic structure on their own, we believe the positive samples to rather play a critical role in influencing which features the model can rely on for grouping samples in the embedding space; for a detailed discussion of this phenomenon, see also Chen et al. (2021). |
1. What is the focus of the paper regarding contrastive learning objectives?
2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental design and comparisons with other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper that the reviewer has? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Contrastive learning objectives are a popular and effective method for self-supervised learning. Most variations of the contrastive objective have a static temperature hyperparameter τ. Previous work suggests that large values of τ lead to improved group-wise discrimination while small values lead to improved instance-wise discrimination. The authors study the effect of varying this temperature parameter in 'long-tail' settings with imbalanced data. They posit that long-tail class performance benefits from instance-wise discrimination ability and run initial 'coarse supervision' experiments to test this claim. Since this method requires (weak) labels, the authors also propose a dynamic temperature schedule that can be used in unsupervised settings. In a series of ablation experiments, they find that a cosine schedule that gradually switches between the two phases leads to the best performance.
Strengths And Weaknesses
Strengths:
The topic is interesting and offers a promising direction for deepening our theoretical understanding of contrastive learning.
The work is clear and easy to follow with the main theoretical claim explicitly laid out and experiments set-up to test it directly.
The experiments are conducted with a variety of models and datasets.
Weaknesses:
It is unclear to me whether the cosine schedule is actually the optimal one. While it clearly outperforms the rapidly changing schedules like the step function, how does it compare, for example, to a linear (oscillating) schedule?
The marginal performance gain from MoCo to MoCo+TS does not seem significantly higher in the imb 150 vs imb 50 case. It isn't entirely clear whether the results support the hypothesis posited initially by the authors; otherwise, one might expect the marginal gain to be higher in the more imbalanced case. It would be useful to compare the marginal performance gain on CIFAR10-LT and on CIFAR10 to confirm that the improvement from the temperature schedule is actually related to the data imbalance.
There have been a few recent papers studying temperature in contrastive learning. In particular, how does your dynamic temperature scaling compare to the method in 'Dynamic Temperature Scaling in Contrastive Self-supervised Learning for Sensor-based Human Activity Recognition' (Khaertdinov et al., 2022)?
Update after author response: The authors have addressed these weaknesses to my satisfaction.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written. I especially appreciate that the main theoretical claim is explicitly laid out and tested.
I have some small concerns about novelty, as described above.
It seems like all relevant details are provided for reproducing the main results. |
ICLR | Title
Temperature Schedules for self-supervised contrastive methods on long-tail data
Abstract
Most approaches for self-supervised learning (SSL) are optimised on curated balanced datasets, e.g. ImageNet, despite the fact that natural data usually exhibits long-tail distributions. In this paper, we analyse the behaviour of one of the most popular variants of SSL, i.e. contrastive methods, on long-tail data. In particular, we investigate the role of the temperature parameter τ in the contrastive loss, by analysing the loss through the lens of average distance maximisation, and find that a large τ emphasises group-wise discrimination, whereas a small τ leads to a higher degree of instance discrimination. While τ has thus far been treated exclusively as a constant hyperparameter, in this work, we propose to employ a dynamic τ and show that a simple cosine schedule can yield significant improvements in the learnt representations. Such a schedule results in a constant ‘task switching’ between an emphasis on instance discrimination and group-wise discrimination and thereby ensures that the model learns both group-wise features, as well as instance-specific details. Since frequent classes benefit from the former, while infrequent classes require the latter, we find this method to consistently improve separation between the classes in long-tail data without any additional computational cost.
1 INTRODUCTION
Deep Neural Networks have shown remarkable capabilities at learning representations of their inputs that are useful for a variety of tasks. Especially since the advent of recent self-supervised learning (SSL) techniques, rapid progress towards learning universally useful representations has been made.
Currently, however, SSL on images is mainly carried out on benchmark datasets that have been constructed and curated for supervised learning (e.g. ImageNet (Deng et al., 2009), CIFAR (Krizhevsky et al., 2009), etc.). Although the labels of curated datasets are not explicitly used in SSL, the structure of the data still follows the predefined set of classes. In particular, the class-balanced nature of curated datasets could result in a learning signal for unsupervised methods. As such, these methods are often not evaluated in the settings they were designed for, i.e. learning from truly unlabelled data. Moreover, some methods (e.g. (Asano et al., 2019; Caron et al., 2020)) even explicitly enforce a uniform prior over the embedding or label space, which cannot be expected to hold for uncurated datasets.
Since uncurated, real-world data tends to follow long-tail distributions (Reed, 2001), in this paper we analyse SSL methods on long-tailed data. Specifically, we analyse the behaviour of contrastive learning (CL) methods, which are among the most popular learning paradigms for SSL.
In CL, the models are trained such that embeddings of different samples are repelled, while embeddings of different ‘views’ (i.e. augmentations) of the same sample are attracted. The strength of those attractive and repelling forces between samples is controlled by a temperature parameter τ , which has been shown to play a crucial role in learning good representations (Chen et al., 2020c;a). To the best of our knowledge, τ has thus far almost exclusively been treated as a constant hyper-parameter.
In contrast, we employ a dynamic τ during training and show that this has a strong effect on the learned embedding space for long-tail distributions. In particular, by introducing a simple schedule for τ we consistently improve the representation quality across a wide range of settings. Crucially, these gains are obtained without additional costs and only require oscillating τ with a cosine schedule.
∗equal contribution. Code available at: github.com/annusha/temperature schedules
This mechanism is grounded in our novel understanding of the effect of temperature on the contrastive loss. In particular, we analyse the contrastive loss from an average distance maximisation perspective, which gives intuitive insights as to why a large temperature emphasises group-wise discrimination, whereas a small temperature leads to a higher degree of instance discrimination and more uniform distributions over the embedding space. Varying τ during training ensures that the model learns both group-wise and instance-specific features, resulting in better separation between head and tail classes.
Overall, our contributions are summarised as follows:
• we carry out an extensive analysis of the effect of τ on imbalanced data;
• we analyse the contrastive loss from an average distance perspective to understand the emergence of semantic structure;
• we propose a simple yet effective temperature schedule that improves the performance across different settings;
• we show that the proposed τ scheduling is robust and consistently improves the performance for different hyperparameter choices.
2 RELATED WORK
Self-supervised representation learning (SSL) from visual data is a quickly evolving field. Recent methods are based on various forms of comparing embeddings between transformations of input images. We divide current methods into two categories: contrastive learning (He et al., 2020; Chen et al., 2020c;a; Oord et al., 2018), and non-contrastive learning (Grill et al., 2020; Zbontar et al., 2021; Chen & He, 2021; Bardes et al., 2022; Wei et al., 2022; Gidaris et al., 2021; Asano et al., 2019; Caron et al., 2020; He et al., 2022). Our analysis concerns the structure and the properties of the embedding space of contrastive methods when training on imbalanced data. Consequently, this section focuses on contrastive learning methods, their analysis and application to imbalanced training datasets.
Contrastive Learning employs instance discrimination (Wu et al., 2018) to learn representations by forming positive pairs of images through augmentations and a loss formulation that maximises their similarity while simultaneously minimising the similarity to other samples. Methods such as MoCo (He et al., 2020; Chen et al., 2020c), SimCLR (Chen et al., 2020a;b), SwAV (Caron et al., 2020), CPC (Oord et al., 2018), CMC Tian et al. (2020a), and Whitening (Ermolov et al., 2021) have shown impressive representation quality and down-stream performance using this learning paradigm. CL has also found applications beyond SSL pre-training, such as multi-modal learning (Shvetsova et al., 2022), domain generalisation (Yao et al., 2022), semantic segmentation (Van Gansbeke et al., 2021), 3D point cloud understanding (Afham et al., 2022), and 3D face generation (Deng et al., 2020).
Negatives. The importance of negatives for contrastive learning is remarkable and has been noted in many prior works (Wang et al., 2021; Yeh et al., 2021; Zhang et al., 2022; Iscen et al., 2018; Kalantidis et al., 2020; Robinson et al., 2020; Khaertdinov et al., 2022). Yeh et al. (2021) propose decoupled learning by removing the positive term from the denominator, Robinson et al. (2020) develop an unsupervised hard-negative sampling technique, Wang et al. (2021) propose to employ a triplet loss, and Zhang et al. (2022); Khaertdinov et al. (2022) propose to improve negative mining with the help of different temperatures for positive and negative samples that can be defined as input-independent or input-dependent functions, respectively. In contrast to explicitly choosing a specific subset of negatives, we discuss the Info-NCE loss (Oord et al., 2018) through the lens of an average distance perspective with respect to all negatives and show that the temperature parameter can be used to implicitly control the effective number of negatives.
Imbalanced Self-Supervised Learning. Learning on imbalanced data instead of curated balanced datasets is an important application, since natural data commonly follow long-tailed distributions (Reed, 2001; Liu et al., 2019; Wang et al., 2017). In recent work, Kang et al. (2020), Yang & Xu (2020), Liu et al. (2021), Zhong et al. (2022), and Gwilliam & Shrivastava (2022) discover that self-supervised learning generally allows learning a more robust embedding space than a supervised counterpart. Tian et al. (2021) explore the down-stream performance of contrastive learning on standard benchmarks based on large-scale uncurated pre-training and propose a multi-stage distillation framework to overcome the shift in the distribution of image classes. Jiang et al. (2021); Zhou et al. (2022) propose to address the data imbalance by identifying and then emphasising tail samples during training in an unsupervised manner. For this, Jiang et al. (2021) compare the outputs of the trained model before and after pruning, assuming that tail samples are more easily ‘forgotten’ by the pruned model and can thus be identified. Zhou et al. (2022) use the loss value for each input to identify tail samples and then use stronger augmentations for those. Instead of modifying the architecture
or the training data of the underlying frameworks, we show that a simple approach—i.e. oscillating the temperature of the Info-NCE loss (Oord et al., 2018) to alternate between instance and group discrimination—can achieve similar performance improvements at a low cost.
Analysis of Contrastive Learning (CL). Given the success of CL in representation learning, it is essential to understand its properties. While some work analyses the interpretability of embedding spaces (Bau et al., 2017; Fong & Vedaldi, 2018; Laina et al., 2020; 2021), here the focus lies on understanding the structure and learning dynamics of the objective function such as in Saunshi et al. (2019); Tsai et al. (2020); Chen et al. (2021). E.g., Chen et al. (2021) study the role of the projection head, the impact of multi-object images, and a feature suppression phenomenon. Wen & Li (2021) analyse the feature learning process to understand the role of augmentations in CL. Robinson et al. (2021) find that an emphasis on instance discrimination can improve representation of some features at the cost of suppressing otherwise well-learned features. Wang & Isola (2020); Wang & Liu (2021) analyse the uniformity of the representations learned with CL. In particular, Wang & Liu (2021) focus on the impact of individual negatives and describe a uniformity-tolerance dilemma when choosing the temperature parameter. In this work, we rely on the previous findings, expand them to long-tailed data distributions and complement them with an understanding of the emergence of semantic structure.
3 METHOD
In the following, we describe our approach and analysis of contrastive learning on long-tailed data. For this, we will first review the core principles of contrastive learning for the case of uniform data (Sec. 3.1). In Sec. 3.2, we then place a particular focus on the temperature parameter τ in the contrastive loss and its impact on the learnt representations. Based on our analysis, in Sec. 3.3 we discuss how the choice of τ might negatively affect the learnt representation of rare classes in the case of long-tailed distributions. Following this, we describe a simple proof-of-concept based on additional coarse supervision to test our hypothesis. We then further develop temperature schedules (TS) that yield significant gains with respect to the separability of the learnt representations in Sec. 4.
3.1 CONTRASTIVE LEARNING
The Info-NCE loss is a popular objective for contrastive learning (CL) and has lead to impressive results for learning useful representations from unlabelled data (Oord et al., 2018; Wu et al., 2018; He et al., 2020; Chen et al., 2020a). Given a set of inputs {x1, . . . , xN}, and the cosine similarities sij between learnt representations ui=f(A(xi)) and vj=g(A(xj)) of the inputs, the loss is defined by:
Lc = N∑ i=1 − log exp (sii/τ) exp (sii/τ) + ∑ j ̸=i exp (sij/τ) . (1)
Here, A(·) applies a random augmentation to its input and f and g are deep neural networks. For a given xi, we will refer to ui as the anchor and to vj as a positive sample if i=j and as a negative if i ̸=j. Last, τ denotes the temperature of the Info-NCE loss and has been found to crucially impact the learnt representations of the model (Wang & Isola, 2020; Wang & Liu, 2021; Robinson et al., 2021).
Uniformity. Specifically, a small τ has been tied to more uniformly distributed representations, see Fig. 1. For example, Wang & Liu (2021) show that the loss is ‘hardness-aware’, i.e. negative samples closest to the anchor receive the highest gradient. In particular, for a given anchor, the gradient with respect to the negative sample vj is scaled by its relative contribution to the denominator in Eq. (1):
∂Lc ∂vj = ∂Lc ∂sij × ∂sij ∂vj = 1 τ × [softmaxk(sik/τ)]j × ∂sij ∂vj . (2)
As a result, for sufficiently small τ , the model minimises the cosine similarity to the nearest negatives in the embedding space, as softmax approaches an indicator function that selects the largest gradient. The optimum of this objective, in turn, is to distribute the embeddings as uniformly as possible over the sphere, as this reduces the average similarity between nearest neighbours, see also Figs. 1 and 3.
Semantic structure. In contrast, a large τ has been observed to induce more semantic structure in the representation space. However, while the effect of small τ has an intuitive explanation, the phenomenon that larger τ induce semantic structure is much more poorly understood and has mostly
been described empirically (Wang & Liu, 2021; Robinson et al., 2021). Specifically, note that for any given positive sample, all negatives are repelled from the anchor, with close-by samples receiving exponentially higher gradients. Nonetheless, for large τ , tightly packed semantic clusters emerge. However, if close-by negatives are heavily repelled, how can this be? Should the loss not be dominated by the hard-negative samples and thus break the semantic structure?
To better understand both phenomena, we propose to view the contrastive loss through the lens of average distance maximisation, which we describe in the following section.
3.2 CONTRASTIVE LEARNING AS AVERAGE DISTANCE MAXIMISATION
As discussed in the previous section, the parameter τ plays a crucial role in shaping the learning dynamics of contrastive learning. To understand this role better, in this section, we present a novel viewpoint on the mechanics of the contrastive loss that explain the observed model behaviour. In particular, and in contrast to Wang & Liu (2021) who focused on the impact of individual negatives, for this we discuss the cumulative impact that all negative samples have on the loss.
To do so, we express the summands Lic of the loss in terms of distances dij instead of similarities sij :
0 ≤ dij = 1− sij τ ≤ 2 τ and cii = exp(dii). (3)
This allows us to rewrite the loss Lic as
Lic = − log
( exp (−dii)
exp (−dii) + ∑ j ̸=i exp (−dij)
) = log 1 + cii∑ j ̸=i exp (−dij) . (4) As the effect cii of the positive sample for a given anchor is the same for all negatives, in the following we place a particular focus on the negatives and their relative influence on the loss in Eq. (4); for a discussion of the influence of positive samples, please see appendix A.4.
To understand the impact of the temperature τ , first note that the loss monotonically increases with the sum Si = ∑ j ̸=i exp(−dij) of exponential distances in Eq. (4). As log is a continuous, monotonic function, we base the following discussion on the impact of τ on the sum Si.
For small τ , the nearest neighbours of the anchor point dominate Si, as differences in similarity are amplified. As a result, the contrastive objective maximises the average distance to nearest neighbours, leading to a uniform distribution over the hypersphere, see Fig. 3. Since individual negatives dominate the loss, this argument is consistent with existing interpretations, e.g. Wang & Liu (2021), as described in the previous section.
For large τ , (e.g. τ ≥ 1), on the other hand, the contributions to the loss from a given negative are on the same order of magnitude for a wide range of cosine similarities. Hence, the constrastive objective can be thought of as maximising the average distance over a wider range of neighbours. Interestingly, since distant negatives will typically outnumber close negatives, the strongest cumulative contribution
to the contrastive loss will come from more distant samples, despite the fact that individually the strongest contributions will come from the closest samples. To visualise this, in Fig. 2a, we plot the contributions of individual samples depending on their distance, as well as the distribution of similarities sij to negatives over the entire dataset in Fig. 2b. Since the number of negatives at larger distances (e.g. sij ≈ 0.1) significantly outnumber close negatives (sij > 0.9), the peak of the cumulative contributions1 shifts towards lower similarities for larger τ , as can be seen in Fig. 2c; in fact, for τ→∞, the distribution of cumulative contributions approaches the distribution of negatives. Hence, the model can significantly decrease the loss by increasing the distance to relatively ‘easy negatives’ for much longer during training, i.e. to samples that are easily distinguishable from the anchor by simple patterns. Instead of learning ‘hard’ features that allow for better instance discrimination between hard negatives, the model will be biased to learn easy patterns that allow for group-wise discrimination and thereby increase the margin between clusters of samples. Note that since the clusters as a whole mutually repel each other, the model is optimised to find a trade-off between the expanding forces between hard negatives (i.e. within a cluster) and the compressing forces that arise due to the margin maximisation between easy negatives (i.e. between clusters).
Importantly, such a bias towards easy features can prevent the models from learning hard features— i.e. by focusing on group-wise discrimination, the model becomes agnostic to instance-specific features that would allow for a better instance discrimination (cf. Robinson et al. (2021)). In the following, we discuss how this might negatively impact rare classes in long-tailed distributions.
3.3 TEMPERATURE SCHEDULES FOR CONTRASTIVE LEARNING ON LONG-TAIL DATA
As discussed in Sec. 1, naturally occurring data typically exhibit long-tail distributions, with some classes occurring much more frequently than others; across the dataset, head classes appear frequently, whereas tail classes contain fewest number of samples. Since self-supervised learning methods are designed to learn representations from unlabelled data, it is important to investigate their performance on imbalanced datasets.
Claim: Tail classes benefit from instance discrimination. As discussed in Sec. 3.2, sufficiently large τ are required for semantic groups to emerge during contrastive learning as this emphasises group-wise discrimination. However, as shown by Robinson et al. (2021), this can come at the cost of encoding instance-specific features and thus hurt the models’ instance discrimination capabilities.
We hypothesise that this disproportionately affects tail classes, as tail classes consist of only relatively few instances to begin with. Their representations should thus remain distinguishable from most of their neighbours and not be grouped with other instances, which are likely of a different class. In contrast, since head classes are represented by many samples, grouping those will be advantageous.
To test this hypothesis, we propose to explicitly train head and tail classes with different τ , to emphasise group discrimination for the former while ensuring instance discrimination for the latter.
1To obtain the cumulative contributions, we group the negatives into 100 non-overlapping bins of size 0.02 depending on their distance to the anchor and report the sum of contributions of a given bin.
Published as a conference paper at ICLR 2023
kNN: 57.93 97.04 | 2.41
kNN: 61.73 98.22 | 10.84
kNN: 50.72 95.62 | 4.82
kNN: 76.36 97.22 | 77.10
kNN: 49.71 89.93 | 20.06
kNN: 49.24 93.20 | 10.03
kNN: 45.76 92.71 | 13.01
kNN: 70.91 93.68 | 68.27
KNN1 correspond Fig.5
Figure 3: Representations of a head and a tail class. Visualisation of the influence of τ on representations of two semantically close classes (trained with all 10 classes). Red: single head class and blue: single tail class from CIFAR10-LT. Small τ=0.1 promotes uniformity, while large τ=1.0 creates dense clusters. With τ{head/tail} we refer to coarse supervision described in Sec. 3.3 which separates tail from head classes. In black / red / blue, we respectively show the average kNN accuracy over all classes / the head class / the tail class.
Experiment: Controlling τ with coarse supervision. We experiment on CIFAR10-LT (a long-tail variant of CIFAR10 - see Sec. 4.1) in which we select a different τ depending on whether the anchor ui is from a head or a tail class, i.e. of the 5 most or least common classes. We chose a relatively large τ (τhead=1.0) for the 5 head classes to emphasise group-wise discrimination and a relatively small τ (τtail=0.1) for the 5 tail classes to encourage the model to learn instance-discriminating features.
As can be seen in Fig. 3, this simple manipulation of the contrastive loss indeed provides a significant benefit with respect to the semantic structure of the embedding space, despite only weakly supervising the learning by adjusting τ according to a coarse (frequent/infrequent) measure of class frequency.
In particular, in Fig. 3, we show the projections of a single head class and a single tail class onto the three leading PCA dimensions and the corresponding kNN accuracies. We would like to highlight the following results. First, without any supervision, we indeed find that the head class consistently performs better for larger values of τ (e.g. 1.0), whereas the tail class consistently benefits from smaller values for τ (e.g. 0.1). Second, when training the model according to the coarse τ supervision as described above, we are not only able to maintain the benefits of large τ values for the head class, but significantly outperform all constant τ versions for the tail class, which improves the overall model performance on all classes; detailed results for all classes are provided in the appendix.
Temperature Schedules (TS) without supervision. Such supervision with respect to the class frequency is, of course, generally not available when training on unlabelled data and these experiments are only designed to test the above claim and provide an intuition about the learning dynamics on long-tail data. However, we would like to point out that the supervision in these experiments is very coarse and only separates the unlabelled data into frequent and infrequent classes. Nonetheless, while the results are encouraging, they are, of course, based on additional, albeit coarse, labels. Therefore, in what follows, we present an unsupervised method that yields similar benefits.
In detail, we propose to modify τ according to a cosine schedule, such that it alternates between an upper (τ+) and a lower (τ−) bound at a fixed period length T :
τcos(t) = (τ+ − τ−)× (1 + cos(2π t/T ))/2 + τ− ; (5)
here, t denotes training epochs. This method is motivated by the observation that τ controls the trade-off between learning easily separable features and learning instance-specific features.
Arguably, however, the models should learn both types of features: i.e. the representation space should be structured according to easily separable features that (optimally) represent semantically meaningful group-wise patterns, whilst still allowing for instance discrimination within those groups.
Therefore, we propose to alternate between both objectives as in Eq. (5), to ensure that throughout training the model learns to encode instance-specific patterns, whilst also structuring the representation space along semantically meaningful features. Note that while we find a cosine schedule to work best and to be robust with respect to the choice for T (Sec. 4.3), we also evaluate alternatives. Even randomly sampling τ from the interval [τ−, τ+] improves the model performance. This indicates that the task switching between group-wise discrimination (large τ ) and instance discrimination (small τ ) is indeed the driving factor behind the performance improvements we observe.
4 EXPERIMENTAL RESULTS
In this section, we validate our hypothesis that simple manipulations of the temperature parameter in Eq. (1) lead to better performance for long-tailed data. First, we introduce our experimental setup in Sec. 4.1, then in Sec. 4.2 we discuss the results across three imbalanced datasets and, finally, we analyse different design choices of the framework through extensive ablation studies in Sec. 4.3.
4.1 IMPLEMENTATION DETAILS
Datasets. We consider long-tailed (LT) versions of the following three popular datasets for the experiments: CIFAR10-LT, CIFAR100-LT, and ImageNet100-LT. For most of the experiments, we follow the setting from SDCLR (Jiang et al., 2021). In case of CIFAR10-LT/CIFAR100-LT, the original datasets (Krizhevsky et al., 2009) consist of 60000 32x32 images sampled uniformly from 10 and 100 semantic classes, respectively, where 50000 images correspond to the training set and 10000 to a test set. Long-tail versions of the datasets are introduced by Cui et al. (2019) and consist of a subset of the original datasets with an exponential decay in the number of images per class. The imbalance ratio controls the uniformity of the dataset and is calculated as the ratio of the sizes of the biggest and the smallest classes. By default, we use an imbalance ratio 100 if not stated otherwise. Experiments in Tab. 1, Tab. 3 are the average over three runs with different permutations of classes. ImageNet100-LT is a subset of the original ImageNet-100 (Tian et al., 2020a) consisting of 100 classes for a total of 12.21k 256x256 images. The number of images per class varies from 1280 to 25.
Training. We use an SGD optimizer for all experiments with a weight decay of 1e-4. As for the learning rate, we utilize linear warm-up for 10 epochs that is followed by a cosine annealing schedule starting from 0.5. We train for 2000 epochs for CIFAR10-LT and CIFAR100-LT and 800 epochs for ImageNet100-LT. For CIFAR10-LT and CIFAR100-LT we use a ResNet18 (He et al., 2016) backbone. For ImageNet100-LT we use a ResNet50 (He et al., 2016) backbone. For both the MoCo (He et al., 2020) and the SimCLR (Chen et al., 2020a) experiments, we follow Jiang et al. (2021) and use the following augmentations: resized crop, color jitters, grey scale and horizontal flip. MoCo details: we use a dictionary of size 10000, a projection dimensionality of 128 and a projection head with one linear layer. SimCLR details: we train with a batch size of 512 and a projection head that has two layers with an output size of 128. For evaluation, we discard the projection head and apply l2-normalisation. Regarding the proposed temperature schedules (TS), we use a period length of T=400 with τ+=1.0 and τ−=0.1 if not stated otherwise; for more details, see appendix A.2.
Evaluation. We use k nearest neighbours (kNN) and linear classifiers to assess the learned features. For kNN, we compute l2-normalised distances between LT samples from the train set and the class-balanced test set. For each test image, we assign it to the majority class among the top-k closest train images. We report accuracy for kNN with k=1 (kNN@1) and with k=10 (kNN@10). Compared to fine-tuning or linear probing, kNN directly evaluates the learned embedding since it relies on the learned metric and local structure of the space. We also evaluate the linear separability and generalisation of the space with a linear classifier that we train on top of the frozen backbone. For this, we consider two setups: balanced few-shot linear probing (FS LP) and long-tailed linear probing (LT LP). For FS LP, the few-shot train set is a direct subset of the original long-tailed train set with the shot number equal to the minimum class size in the original LT train set. For LT LP, we use the original LT training set. For extended tables, see appendix A.3.
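The kNN evaluation can be sketched as follows (plain numpy; variable names are ours, labels are assumed to be non-negative integers, and batching is omitted for clarity):

import numpy as np

def knn_accuracy(train_feats, train_labels, test_feats, test_labels, k=10):
    # l2-normalise so distances reflect the learned cosine metric
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = test @ train.T                        # cosine similarities
    topk = np.argsort(-sims, axis=1)[:, :k]      # k nearest train images
    preds = np.array([np.bincount(train_labels[idx]).argmax() for idx in topk])
    return np.mean(preds == test_labels)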
4.2 EFFECTIVENESS OF TEMPERATURE SCHEDULES
Contrastive learning with TS. In Tab. 1 we present the efficacy of temperature schedules (TS) for two well-known contrastive learning frameworks MoCo (He et al., 2020) and SimCLR (Chen et al., 2020a). We find that both frameworks benefit from varying the temperature and we observe consistent improvements over all evaluation metrics for CIFAR10-LT and CIFAR100-LT, i.e. the local structure of the embedding space (kNN) and the global structure (linear probe) are both improved. Moreover, we show in Tab. 3 that our finding also transfers to ImageNet100-LT. Furthermore, in Tab. 2 we evaluate the performance of the proposed method on the CIFAR10 and CIFAR100 datasets with different imbalance ratios. An imbalance ratio of 50 (imb50) reflects less pronounced imbalance, and imb150 corresponds to the datasets with only 30 (CIFAR10) and 3 (CIFAR100) samples for the
smallest class. Varying τ during training improves the performance for different long-tailed data; for a discussion on the dependence of the improvement on the imbalance ratio, please see the appendix.
TS vs SDCLR. Further, we compare our method with SDCLR (Jiang et al., 2021). In SDCLR, SimCLR is modified such that the embeddings of the online model are contrasted with those of a pruned version of the same model, which is updated after every epoch. Since the pruning is done by simply masking the pruned weights of the original model, SDCLR requires twice as much memory compared to the original SimCLR and extra computational time to prune the model every epoch. In contrast, our method does not require any changes in the architecture or training. In Tab. 3 we show that this simple approach improves not only over the original SimCLR, but also over SDCLR in most metrics.
4.3 ABLATIONS
In this section, we evaluate how the hyperparameters in Eq. (5) can influence the model behaviour.
Cosine Boundaries. First, we vary the lower τ− and upper τ+ bounds of τ for the cosine schedule. In Tab. 4 we assess the performance of MoCo+TS with different τ− and τ+ on CIFAR10 with FS LP. We observe a clear trend that with a wider range of τ values the performance increases. We attribute this to the ability of the model to learn better ‘hard’ features with low τ and improve semantic structure for high τ . Note that 0.07 is the value for τ in many current contrastive learning methods.
Cosine Period. Further, we investigate if the length of the period T in Eq. (5) impacts the performance of the model. In Tab. 6, we show that modifying the temperature τ based on the cosine schedule is beneficial during training independently of the period T . The performance varies insignificantly depending on T and consistently improves over the standard fixed τ=0.2, with the best performance achieved at T=400. Even though the performance is stable with respect to the length of the period, it changes within one period, as we show in Fig. 4. Here, we average the accuracy of the last full period over models trained with different T and find that the models reach the best performance around 0.7T . Based on this observation, we recommend stopping training after (n − 0.3)T epochs, where n is the number of full periods.
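As a small illustration of this stopping heuristic (a hypothetical helper of our own):

def recommended_stop_epoch(n_periods, T=400):
    # peak performance occurs around 0.7*T into a period, i.e. (n - 0.3) * T
    return int((n_periods - 0.3) * T)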
Alternatives to Cosine Schedule. Additionally, we test different methods of varying the temperature parameter τ and report the results in Tab. 5: we examine a linearly oscillating (oscil) function, a step function, and random sampling. For the linear oscillations, we follow the same schedule as for the cosine version, as shown on the right of Tab. 5. For the step function, we change τ from a low (0.1) to a high (0.5) value and back every 200 epochs. For random, we uniformly sample values for τ from the range [0.1, 0.5]. In Tab. 5 we observe that all of these alternatives for varying the τ value also improve the performance over the fixed temperature, while the cosine schedule achieves the best performance. These results indicate that it is indeed the task switching between group-wise and instance-wise discrimination during training which is the driving factor for the observed improvements for unsupervised long-tail representation learning. We assume that the reason why slow oscillation of the temperature performs better than fast (i.e. random) temperature changes is grounded in the learning dynamics and the slow evolution of the embedding space during training.
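For concreteness, the schedules compared above could be sketched as follows (the bounds follow the text; function names are ours):

import numpy as np

def tau_cosine(ep, T=400, lo=0.1, hi=1.0):
    return (hi - lo) * (1 + np.cos(2 * np.pi * ep / T)) / 2 + lo

def tau_linear(ep, T=400, lo=0.1, hi=1.0):
    # triangle wave with the same period and bounds as the cosine schedule
    phase = (ep % T) / T
    return lo + (hi - lo) * 2 * abs(phase - 0.5)

def tau_step(ep, switch=200, lo=0.1, hi=0.5):
    # switch between the low and high value every `switch` epochs
    return lo if (ep // switch) % 2 == 0 else hi

def tau_random(rng, lo=0.1, hi=0.5):
    return rng.uniform(lo, hi)   # e.g. rng = np.random.default_rng(0)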
5 CONCLUSION
In this work, we discover the surprising effectiveness of temperature schedules for self-supervised contrastive representation learning on imbalanced datasets. In particular, we find that a simple cosine schedule for τ consistently improves two state-of-the-art contrastive methods over several datasets and different imbalance ratios, without introducing any additional cost.
Importantly, our approach is based on a novel perspective on the contrastive loss, in which the average distance maximisation aspect is emphasised. This perspective sheds light on which samples dominate the contrastive loss and explains why large values for τ can lead to the emergence of tight clusters in the embedding space, despite the fact that individual instances always repel each other.
Specifically, we find that while a large τ is necessary to induce semantic structure, the concomitant focus on group-wise discrimination biases the model to encode easily separable features rather than instance-specific details. However, in long-tailed distributions, this can be particularly harmful to the most infrequent classes, as those require a higher degree of instance discrimination to remain distinguishable from the prevalent semantic categories. The proposed cosine schedule for τ overcomes this tension by alternating between an emphasis on instance discrimination (small τ) and group-wise discrimination (large τ). As a result of this constant ‘task switching’, the model is trained to both structure the embedding space according to semantically meaningful features, whilst also encoding instance-specific details such that rare classes remain distinguishable from dominant ones.
ETHICS STATEMENT
The paper proposes an analysis and a method to improve the performance of self-supervised representation learning methods based on the contrastive loss. The method and investigation in this paper do not introduce any ethical issues to the field of representation learning, as it is decoupled from the training data. Nonetheless, we would like to point out that representation learning does not automatically prevent models from learning harmful biases from the training data and should not be used outside of research applications without thorough evaluation for fairness and bias.
ACKNOWLEDGEMENTS
C. R. is supported by VisualAI EP/T028572/1 and ERC-UNION-CoG-101001212.
A APPENDIX
A.1 PSEUDO-CODE FOR REPRODUCIBILITY OF COSINE SCHEDULE
Algorithm 1 Cosine Schedule (written as runnable Python; requires period T > 0, τ− = 0.1, τ+ = 1.0)

import numpy as np

def cosine_tau(epoch, T=400, tau_min=0.1, tau_max=1.0):
    # Eq. (5): tau moves between tau_max (at epoch 0) and tau_min (at epoch T/2)
    return (tau_max - tau_min) * (1 + np.cos(2 * np.pi * epoch / T)) / 2 + tau_min
Insert algorithm 1 into your favourite contrastive learning framework to check it out!
A.2 IMPLEMENTATION DETAILS
Evaluation details. Following Jiang et al. (2021), we separate 5000 images for CIFAR10/100-LT as a validation set for each split. As we discussed in the main paper, the performance of the model depends on the relative position within a period T . Therefore we utilise the validation split to choose a checkpoint for further testing on the standard test splits for CIFAR10/100-LT. Precisely, for each dataset, we select the evaluation epoch for the checkpoint based only on the validation set of the first random split; the other splits of the same dataset are evaluated using the same number of epochs. Note that for ImageNet100-LT there is no validation split and we select the last checkpoint as in Jiang et al. (2021). For a fair comparison, we also reproduce the numbers from Jiang et al. (2021) in the same way.
Division into head, mid, and tail classes. Following Jiang et al. (2021), we divide all the classes into three categories: head classes are those with the most samples, tail classes are those with the fewest samples, and mid classes are the rest. In particular, for CIFAR10-LT for each split there are 4 head classes, 3 mid classes, and 3 tail classes; for CIFAR100-LT there are 34 head classes, 33 mid classes, 33 tail classes; for ImageNet100-LT head classes are classes with more than 100 instances, tail classes have fewer than 20 instances per class, and mid are the rest.
A.3 EXTENDED RESULTS
Extension of Fig. 3. In Fig. 5 we provide full results of kNN accuracy on CIFAR10 when the model is trained with different fixed τ values and with coarse binary supervision. Tail classes in particular are improved by instance discrimination (small τtail).
Head-mid-tail classes evaluation. In the following, we present a detailed comparison of SimCLR and SimCLR+TS on head, mid, and tail classes on CIFAR10-LT in Tab. 7, on CIFAR100-LT in Tab. 8 and on ImageNet100-LT in Tab. 9. We observe consistent improvement for all evaluation metrics for all types of classes over the three datasets.
Influence of TS on Uniform vs Long-Tailed Distributions. To further corroborate that TS is particularly helpful for imbalanced data, we also apply TS to uniformly distributed data. In Tab. 10, we can observe that the cosine schedule yields significant and consistent gains for the long-tailed version of CIFAR10 (CIFAR10-LT), but not for the uniform one (CIFAR10-Uniform). We assume that both head classes and tail classes of a long-tailed distribution benefit from a better separation between the two: on the one hand, the tail classes form better clusters and are thus easier to classify based on their neighbours; on the other hand, the clusters of the head classes are 'purified', which should similarly improve performance. For the uniform distribution, in contrast, we do not observe such an influence of TS and the performance changes only marginally.
A.4 INFLUENCE OF THE POSITIVE SAMPLES ON CONTRASTIVE LEARNING
In Sec. 3.2, we particularly focused on the impact of the negative samples on the learning dynamics under the contrastive objective, as they likely are the driving factor with respect to the semantic structure. In fact, we find that the positive samples should have an inverse relation with the temperature τ and thus cannot explain the observed learning dynamics, as we discuss in the following.
To understand the impact of the positive samples, first note their role in the loss (same as Eq. (4)):
\mathcal{L}^i_c = \log\left(1 + c_{ii}\, S_i\right). \tag{6}

In particular, c_{ii} scales the entire sum S_i = \sum_{j \neq i} \exp(-d_{ij}). As such, encoding two augmentations of the same instance at a large distance is much more ‘costly’ for the model than encoding two different samples close to each other, as each and every summand of S_i is amplified by the corresponding c_{ii}. As a result, the model will be biased to ‘err on the safe side’ and become invariant to the augmentations, which has been one of the main motivations for introducing augmentations in contrastive learning in the first place, cf. Tian et al. (2020b); Chen et al. (2020a); Caron et al. (2020).
Consequently, the positive samples, of course, also influence the forming of clusters in the embedding space as they induce invariance with respect to augmentations. Note, however, that this does not contradict our analysis regarding the impact of negative samples, but rather corroborates it.
In particular, cii biases the model to become invariant to the applied augmentations for all values of τ ; in fact, for small τ , this invariance is even emphasised as cii increases for small τ and the influence of the negatives is diminished. Hence, if the augmentations were the main factor in inducing semantic structure in the embedding space, τ should have the opposite effect of the one we and many others (Wang & Liu, 2021; Zhang et al., 2022; 2021) observe.
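As a quick worked example of this inverse relation (the similarity value s_{ii} = 0.9 is our own illustrative choice): since d_{ii} = (1 - s_{ii})/\tau,

c_{ii} = \exp\big((1 - s_{ii})/\tau\big) =
\begin{cases}
e^{1.0} \approx 2.72 & \text{for } \tau = 0.1,\\
e^{0.1} \approx 1.11 & \text{for } \tau = 1.0,
\end{cases}

so the positive term is amplified for small τ, emphasising augmentation invariance exactly when the influence of the negatives is diminished.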
Thus, instead of inducing semantic structure on their own, we believe the positive samples to rather play a critical role in influencing which features the model can rely on for grouping samples in the embedding space; for a detailed discussion of this phenomenon, see also Chen et al. (2021). | 1. What is the focus of the paper regarding long-tailed learning?
2. What are the strengths of the proposed approach, particularly in terms of its uniqueness and computational efficiency?
3. What are the weaknesses of the paper, especially in comparison to other long-tailed learning methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper tackles the long-tailed learning problem by varying the softmax temperature during the course of training. The authors argue that different magnitudes of the temperature value induce different learning preferences, which makes it possible to gradually switch the learning objective by carefully setting up a temperature scheduler. The proposed method has been evaluated on several long-tailed learning benchmarks, demonstrating its effectiveness.
Strengths And Weaknesses
Strength:
The proposed method is interesting and offers a unique perspective.

The proposed method requires no computation overhead, which is an appealing merit.
Weaknesses:
The evaluation is not comprehensive. Only contrastive learning based methods are compared against.
The improvement is quite limited compared with other long-tailed learning methods.
Clarity, Quality, Novelty And Reproducibility
The overall quality is good, which is easy to follow and clearly written. Also the proposed method is novel. |
ICLR | Title
Temperature Schedules for self-supervised contrastive methods on long-tail data
Abstract
Most approaches for self-supervised learning (SSL) are optimised on curated balanced datasets, e.g. ImageNet, despite the fact that natural data usually exhibits long-tail distributions. In this paper, we analyse the behaviour of one of the most popular variants of SSL, i.e. contrastive methods, on long-tail data. In particular, we investigate the role of the temperature parameter τ in the contrastive loss, by analysing the loss through the lens of average distance maximisation, and find that a large τ emphasises group-wise discrimination, whereas a small τ leads to a higher degree of instance discrimination. While τ has thus far been treated exclusively as a constant hyperparameter, in this work, we propose to employ a dynamic τ and show that a simple cosine schedule can yield significant improvements in the learnt representations. Such a schedule results in a constant ‘task switching’ between an emphasis on instance discrimination and group-wise discrimination and thereby ensures that the model learns both group-wise features, as well as instance-specific details. Since frequent classes benefit from the former, while infrequent classes require the latter, we find this method to consistently improve separation between the classes in long-tail data without any additional computational cost.
1 INTRODUCTION
Deep Neural Networks have shown remarkable capabilities at learning representations of their inputs that are useful for a variety of tasks. Especially since the advent of recent self-supervised learning (SSL) techniques, rapid progress towards learning universally useful representations has been made.
Currently, however, SSL on images is mainly carried out on benchmark datasets that have been constructed and curated for supervised learning (e.g. ImageNet (Deng et al., 2009), CIFAR (Krizhevsky et al., 2009), etc.). Although the labels of curated datasets are not explicitly used in SSL, the structure of the data still follows the predefined set of classes. In particular, the class-balanced nature of curated datasets could result in a learning signal for unsupervised methods. As such, these methods are often not evaluated in the settings they were designed for, i.e. learning from truly unlabelled data. Moreover, some methods (e.g. (Asano et al., 2019; Caron et al., 2020)) even explicitly enforce a uniform prior over the embedding or label space, which cannot be expected to hold for uncurated datasets.
Since uncurated, real-world data tends to follow long-tail distributions (Reed, 2001), in this paper we analyse SSL methods on long-tailed data. Specifically, we analyse the behaviour of contrastive learning (CL) methods, which are among the most popular learning paradigms for SSL.
In CL, the models are trained such that embeddings of different samples are repelled, while embeddings of different ‘views’ (i.e. augmentations) of the same sample are attracted. The strength of those attractive and repelling forces between samples is controlled by a temperature parameter τ , which has been shown to play a crucial role in learning good representations (Chen et al., 2020c;a). To the best of our knowledge, τ has thus far almost exclusively been treated as a constant hyper-parameter.
In contrast, we employ a dynamic τ during training and show that this has a strong effect on the learned embedding space for long-tail distributions. In particular, by introducing a simple schedule for τ we consistently improve the representation quality across a wide range of settings. Crucially, these gains are obtained without additional costs and only require oscillating τ with a cosine schedule.
∗equal contribution. Code available at: github.com/annusha/temperature schedules
This mechanism is grounded in our novel understanding of the effect of temperature on the contrastive loss. In particular, we analyse the contrastive loss from an average distance maximisation perspective, which gives intuitive insights as to why a large temperature emphasises group-wise discrimination, whereas a small temperature leads to a higher degree of instance discrimination and more uniform distributions over the embedding space. Varying τ during training ensures that the model learns both group-wise and instance-specific features, resulting in better separation between head and tail classes.
Overall, our contributions are summarised as follows:
• we carry out an extensive analysis of the effect of τ on imbalanced data;
• we analyse the contrastive loss from an average distance perspective to understand the emergence of semantic structure;
• we propose a simple yet effective temperature schedule that improves the performance across different settings;
• we show that the proposed τ scheduling is robust and consistently improves the performance for different hyperparameter choices.
2 RELATED WORK
Self-supervised representation learning (SSL) from visual data is a quickly evolving field. Recent methods are based on various forms of comparing embeddings between transformations of input images. We divide current methods into two categories: contrastive learning (He et al., 2020; Chen et al., 2020c;a; Oord et al., 2018), and non-contrastive learning (Grill et al., 2020; Zbontar et al., 2021; Chen & He, 2021; Bardes et al., 2022; Wei et al., 2022; Gidaris et al., 2021; Asano et al., 2019; Caron et al., 2020; He et al., 2022). Our analysis concerns the structure and the properties of the embedding space of contrastive methods when training on imbalanced data. Consequently, this section focuses on contrastive learning methods, their analysis and application to imbalanced training datasets.
Contrastive Learning employs instance discrimination (Wu et al., 2018) to learn representations by forming positive pairs of images through augmentations and a loss formulation that maximises their similarity while simultaneously minimising the similarity to other samples. Methods such as MoCo (He et al., 2020; Chen et al., 2020c), SimCLR (Chen et al., 2020a;b), SwAV (Caron et al., 2020), CPC (Oord et al., 2018), CMC (Tian et al., 2020a), and Whitening (Ermolov et al., 2021) have shown impressive representation quality and down-stream performance using this learning paradigm. CL has also found applications beyond SSL pre-training, such as multi-modal learning (Shvetsova et al., 2022), domain generalisation (Yao et al., 2022), semantic segmentation (Van Gansbeke et al., 2021), 3D point cloud understanding (Afham et al., 2022), and 3D face generation (Deng et al., 2020).
Negatives. The importance of negatives for contrastive learning is remarkable and noticed in many prior works (Wang et al., 2021; Yeh et al., 2021; Zhang et al., 2022; Iscen et al., 2018; Kalantidis et al., 2020; Robinson et al., 2020; Khaertdinov et al., 2022). Yeh et al. (2021) propose decoupled learning by removing the positive term from the denominator, Robinson et al. (2020) develop an unsupervised hard-negative sampling technique, Wang et al. (2021) propose to employ a triplet loss, and Zhang et al. (2022); Khaertdinov et al. (2022) propose to improve negative mining with the help of different temperatures for positive and negative samples that can be defined as input-independent or input-dependent functions, respectively. In contrast to explicitly choosing a specific subset of negatives, we discuss the Info-NCE loss (Oord et al., 2018) through the lens of an average distance perspective with respect to all negatives and show that the temperature parameter can be used to implicitly control the effective number of negatives.
Imbalanced Self-Supervised Learning. Learning on imbalanced data instead of curated balanced datasets is an important application since natural data commonly follows long-tailed distributions (Reed, 2001; Liu et al., 2019; Wang et al., 2017). In recent work, Kang et al. (2020), Yang & Xu (2020), Liu et al. (2021), Zhong et al. (2022), Gwilliam & Shrivastava (2022) discover that self-supervised learning generally allows to learn a more robust embedding space than a supervised counterpart. Tian et al. (2021) explore the down-stream performance of contrastive learning on standard benchmarks based on large-scale uncurated pre-training and propose a multi-stage distillation framework to overcome the shift in the distribution of image classes. Jiang et al. (2021); Zhou et al. (2022) propose to address the data imbalance by identifying and then emphasising tail samples during training in an unsupervised manner. For this, Jiang et al. (2021) compare the outputs of the trained model before and after pruning, assuming that tail samples are more easily ‘forgotten’ by the pruned model and can thus be identified. Zhou et al. (2022), use the loss value for each input to identify tail samples and then use stronger augmentations for those. Instead of modifying the architecture
or the training data of the underlying frameworks, we show that a simple approach—i.e. oscillating the temperature of the Info-NCE loss (Oord et al., 2018) to alternate between instance and group discrimination—can achieve similar performance improvements at a low cost.
Analysis of Contrastive Learning (CL). Given the success of CL in representation learning, it is essential to understand its properties. While some work analyses the interpretability of embedding spaces (Bau et al., 2017; Fong & Vedaldi, 2018; Laina et al., 2020; 2021), here the focus lies on understanding the structure and learning dynamics of the objective function such as in Saunshi et al. (2019); Tsai et al. (2020); Chen et al. (2021). E.g., Chen et al. (2021) study the role of the projection head, the impact of multi-object images, and a feature suppression phenomenon. Wen & Li (2021) analyse the feature learning process to understand the role of augmentations in CL. Robinson et al. (2021) find that an emphasis on instance discrimination can improve representation of some features at the cost of suppressing otherwise well-learned features. Wang & Isola (2020); Wang & Liu (2021) analyse the uniformity of the representations learned with CL. In particular, Wang & Liu (2021) focus on the impact of individual negatives and describe a uniformity-tolerance dilemma when choosing the temperature parameter. In this work, we rely on the previous findings, expand them to long-tailed data distributions and complement them with an understanding of the emergence of semantic structure.
3 METHOD
In the following, we describe our approach and analysis of contrastive learning on long-tailed data. For this, we will first review the core principles of contrastive learning for the case of uniform data (Sec. 3.1). In Sec. 3.2, we then place a particular focus on the temperature parameter τ in the contrastive loss and its impact on the learnt representations. Based on our analysis, in Sec. 3.3 we discuss how the choice of τ might negatively affect the learnt representation of rare classes in the case of long-tailed distributions. Following this, we describe a simple proof-of-concept based on additional coarse supervision to test our hypothesis. We then further develop temperature schedules (TS) that yield significant gains with respect to the separability of the learnt representations in Sec. 4.
3.1 CONTRASTIVE LEARNING
The Info-NCE loss is a popular objective for contrastive learning (CL) and has led to impressive results for learning useful representations from unlabelled data (Oord et al., 2018; Wu et al., 2018; He et al., 2020; Chen et al., 2020a). Given a set of inputs {x_1, . . . , x_N}, and the cosine similarities s_{ij} between learnt representations u_i = f(A(x_i)) and v_j = g(A(x_j)) of the inputs, the loss is defined by:
\mathcal{L}_c = \sum_{i=1}^{N} -\log \frac{\exp(s_{ii}/\tau)}{\exp(s_{ii}/\tau) + \sum_{j \neq i} \exp(s_{ij}/\tau)}. \tag{1}
Here, A(·) applies a random augmentation to its input and f and g are deep neural networks. For a given x_i, we will refer to u_i as the anchor and to v_j as a positive sample if i = j and as a negative if i ≠ j. Last, τ denotes the temperature of the Info-NCE loss and has been found to crucially impact the learnt representations of the model (Wang & Isola, 2020; Wang & Liu, 2021; Robinson et al., 2021).
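As a minimal, self-contained sketch of Eq. (1) (plain numpy; we assume the two augmented views have already been encoded so that row i of u and row i of v form a positive pair):

import numpy as np

def info_nce(u, v, tau=0.2):
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    s = (u @ v.T) / tau              # s[i, j] = s_ij / tau
    # log-softmax along each row; the diagonal holds the positive pairs
    log_p = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return -np.diag(log_p).sum()     # the sum over anchors, as in Eq. (1)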
Uniformity. Specifically, a small τ has been tied to more uniformly distributed representations, see Fig. 1. For example, Wang & Liu (2021) show that the loss is ‘hardness-aware’, i.e. negative samples closest to the anchor receive the highest gradient. In particular, for a given anchor, the gradient with respect to the negative sample vj is scaled by its relative contribution to the denominator in Eq. (1):
\frac{\partial \mathcal{L}_c}{\partial v_j} = \frac{\partial \mathcal{L}_c}{\partial s_{ij}} \times \frac{\partial s_{ij}}{\partial v_j} = \frac{1}{\tau} \times \left[\mathrm{softmax}_k(s_{ik}/\tau)\right]_j \times \frac{\partial s_{ij}}{\partial v_j}. \tag{2}
As a result, for sufficiently small τ , the model minimises the cosine similarity to the nearest negatives in the embedding space, as softmax approaches an indicator function that selects the largest gradient. The optimum of this objective, in turn, is to distribute the embeddings as uniformly as possible over the sphere, as this reduces the average similarity between nearest neighbours, see also Figs. 1 and 3.
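A quick numerical illustration of this 'hardness-awareness' (the four similarity values are our own example): the softmax factor in Eq. (2) concentrates nearly all gradient mass on the closest negative as τ shrinks.

import numpy as np

s = np.array([0.9, 0.5, 0.1, -0.3])        # similarities to four negatives
for tau in (0.07, 1.0):
    w = np.exp(s / tau) / np.exp(s / tau).sum()
    print(tau, w.round(3))
# tau=0.07 -> [0.997 0.003 0.    0.   ]   (nearest negative dominates)
# tau=1.0  -> [0.413 0.277 0.186 0.124]   (gradient spread over all negatives)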
Semantic structure. In contrast, a large τ has been observed to induce more semantic structure in the representation space. However, while the effect of small τ has an intuitive explanation, the phenomenon that larger τ induce semantic structure is much more poorly understood and has mostly
been described empirically (Wang & Liu, 2021; Robinson et al., 2021). Specifically, note that for any given positive sample, all negatives are repelled from the anchor, with close-by samples receiving exponentially higher gradients. Nonetheless, for large τ , tightly packed semantic clusters emerge. However, if close-by negatives are heavily repelled, how can this be? Should the loss not be dominated by the hard-negative samples and thus break the semantic structure?
To better understand both phenomena, we propose to view the contrastive loss through the lens of average distance maximisation, which we describe in the following section.
3.2 CONTRASTIVE LEARNING AS AVERAGE DISTANCE MAXIMISATION
As discussed in the previous section, the parameter τ plays a crucial role in shaping the learning dynamics of contrastive learning. To understand this role better, in this section we present a novel viewpoint on the mechanics of the contrastive loss that explains the observed model behaviour. In particular, and in contrast to Wang & Liu (2021), who focused on the impact of individual negatives, we discuss the cumulative impact that all negative samples have on the loss.
To do so, we express the summands \mathcal{L}^i_c of the loss in terms of distances d_{ij} instead of similarities s_{ij}:
0 \le d_{ij} = \frac{1 - s_{ij}}{\tau} \le \frac{2}{\tau} \quad \text{and} \quad c_{ii} = \exp(d_{ii}). \tag{3}
This allows us to rewrite the loss \mathcal{L}^i_c as

\mathcal{L}^i_c = -\log\left(\frac{\exp(-d_{ii})}{\exp(-d_{ii}) + \sum_{j \neq i} \exp(-d_{ij})}\right) = \log\left(1 + c_{ii} \sum_{j \neq i} \exp(-d_{ij})\right). \tag{4}

As the effect c_{ii} of the positive sample for a given anchor is the same for all negatives, in the following we place a particular focus on the negatives and their relative influence on the loss in Eq. (4); for a discussion of the influence of positive samples, please see appendix A.4.
To understand the impact of the temperature τ, first note that the loss monotonically increases with the sum S_i = \sum_{j \neq i} \exp(-d_{ij}) of exponential distances in Eq. (4). As log is a continuous, monotonic function, we base the following discussion on the impact of τ on the sum S_i.
For small τ, the nearest neighbours of the anchor point dominate S_i, as differences in similarity are amplified. As a result, the contrastive objective maximises the average distance to nearest neighbours, leading to a uniform distribution over the hypersphere, see Fig. 3. Since individual negatives dominate the loss, this argument is consistent with existing interpretations, e.g. Wang & Liu (2021), as described in the previous section.
For large τ (e.g. τ ≥ 1), on the other hand, the contributions to the loss from a given negative are on the same order of magnitude for a wide range of cosine similarities. Hence, the contrastive objective can be thought of as maximising the average distance over a wider range of neighbours. Interestingly, since distant negatives will typically outnumber close negatives, the strongest cumulative contribution
to the contrastive loss will come from more distant samples, despite the fact that individually the strongest contributions will come from the closest samples. To visualise this, in Fig. 2a, we plot the contributions of individual samples depending on their distance, as well as the distribution of similarities sij to negatives over the entire dataset in Fig. 2b. Since the number of negatives at larger distances (e.g. sij ≈ 0.1) significantly outnumber close negatives (sij > 0.9), the peak of the cumulative contributions1 shifts towards lower similarities for larger τ , as can be seen in Fig. 2c; in fact, for τ→∞, the distribution of cumulative contributions approaches the distribution of negatives. Hence, the model can significantly decrease the loss by increasing the distance to relatively ‘easy negatives’ for much longer during training, i.e. to samples that are easily distinguishable from the anchor by simple patterns. Instead of learning ‘hard’ features that allow for better instance discrimination between hard negatives, the model will be biased to learn easy patterns that allow for group-wise discrimination and thereby increase the margin between clusters of samples. Note that since the clusters as a whole mutually repel each other, the model is optimised to find a trade-off between the expanding forces between hard negatives (i.e. within a cluster) and the compressing forces that arise due to the margin maximisation between easy negatives (i.e. between clusters).
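A sketch of how these cumulative contributions could be computed (pure numpy, following the binning procedure described in the footnote below; the array of similarities would come from a trained encoder):

import numpy as np

def cumulative_contributions(sims, tau, n_bins=100):
    # per-negative contribution exp(-d_ij), with d_ij = (1 - s_ij) / tau
    contrib = np.exp(-(1.0 - sims) / tau)
    # 100 non-overlapping similarity bins of width 0.02 covering [-1, 1]
    bins = np.linspace(-1.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(sims, bins) - 1, 0, n_bins - 1)
    return np.bincount(idx, weights=contrib, minlength=n_bins)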
Importantly, such a bias towards easy features can prevent the models from learning hard features— i.e. by focusing on group-wise discrimination, the model becomes agnostic to instance-specific features that would allow for a better instance discrimination (cf. Robinson et al. (2021)). In the following, we discuss how this might negatively impact rare classes in long-tailed distributions.
3.3 TEMPERATURE SCHEDULES FOR CONTRASTIVE LEARNING ON LONG-TAIL DATA
As discussed in Sec. 1, naturally occurring data typically exhibit long-tail distributions, with some classes occurring much more frequently than others; across the dataset, head classes appear frequently, whereas tail classes contain fewest number of samples. Since self-supervised learning methods are designed to learn representations from unlabelled data, it is important to investigate their performance on imbalanced datasets.
Claim: Tail classes benefit from instance discrimination. As discussed in Sec. 3.2, sufficiently large τ are required for semantic groups to emerge during contrastive learning as this emphasises group-wise discrimination. However, as shown by Robinson et al. (2021), this can come at the cost of encoding instance-specific features and thus hurt the models’ instance discrimination capabilities.
We hypothesise that this disproportionately affects tail classes, as tail classes consist of only relatively few instances to begin with. Their representations should thus remain distinguishable from most of their neighbours and not be grouped with other instances, which are likely of a different class. In contrast, since head classes are represented by many samples, grouping those will be advantageous.
To test this hypothesis, we propose to explicitly train head and tail classes with different τ , to emphasise group discrimination for the former while ensuring instance discrimination for the latter.
1To obtain the cumulative contributions, we group the negatives into 100 non-overlapping bins of size 0.02 depending on their distance to the anchor and report the sum of contributions of a given bin.
[Figure 3 panel annotations: average kNN accuracy over all classes with head | tail class accuracies in parentheses: 57.93 (97.04 | 2.41); 61.73 (98.22 | 10.84); 50.72 (95.62 | 4.82); 76.36 (97.22 | 77.10); 49.71 (89.93 | 20.06); 49.24 (93.20 | 10.03); 45.76 (92.71 | 13.01); 70.91 (93.68 | 68.27). The kNN@1 values correspond to Fig. 5.]
Figure 3: Representations of a head and a tail class. Visualisation of the influence of τ on representations of two semantically close classes (trained with all 10 classes). Red: single head class and blue: single tail class from CIFAR10-LT. Small τ=0.1 promotes uniformity, while large τ=1.0 creates dense clusters. With τ_{head/tail} we refer to the coarse supervision described in Sec. 3.3, which separates tail from head classes. In black / red / blue, we respectively show the average kNN accuracy over all classes / the head class / the tail class.
Experiment: Controlling τ with coarse supervision. We experiment on CIFAR10-LT (a long-tail variant of CIFAR10 - see Sec. 4.1) in which we select a different τ depending on whether the anchor u_i is from a head or a tail class, i.e. one of the 5 most or least common classes. We chose a relatively large τ (τ_head=1.0) for the 5 head classes to emphasise group-wise discrimination and a relatively small τ (τ_tail=0.1) for the 5 tail classes to encourage the model to learn instance-discriminating features.
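This per-anchor temperature choice could be sketched as follows (the τ values follow the text; the head-class index set depends on the split and the helper is our own illustration):

HEAD_CLASSES = {0, 1, 2, 3, 4}   # indices of the 5 most frequent classes (split-dependent)

def coarse_tau(anchor_class, tau_head=1.0, tau_tail=0.1):
    # large tau for head anchors (group-wise discrimination),
    # small tau for tail anchors (instance discrimination)
    return tau_head if anchor_class in HEAD_CLASSES else tau_tail

Each anchor's summand of Eq. (1) is then computed with its own τ.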
As can be seen in Fig. 3, this simple manipulation of the contrastive loss indeed provides a significant benefit with respect to the semantic structure of the embedding space, despite only weakly supervising the learning by adjusting τ according to a coarse (frequent/infrequent) measure of class frequency.
In particular, in Fig. 3, we show the projections of a single head class and a single tail class onto the three leading PCA dimensions and the corresponding kNN accuracies. We would like to highlight the following results. First, without any supervision, we indeed find that the head class consistently performs better for larger values of τ (e.g. 1.0), whereas the tail class consistently benefits from smaller values for τ (e.g. 0.1). Second, when training the model according to the coarse τ supervision as described above, we are not only able to maintain the benefits of large τ values for the head class, but significantly outperform all constant τ versions for the tail class, which improves the overall model performance on all classes; detailed results for all classes are provided in the appendix.
Temperature Schedules (TS) without supervision. Such supervision with respect to the class frequency is, of course, generally not available when training on unlabelled data and these experiments are only designed to test the above claim and provide an intuition about the learning dynamics on long-tail data. However, we would like to point out that the supervision in these experiments is very coarse and only separates the unlabelled data into frequent and infrequent classes. Nonetheless, while the results are encouraging, they are, of course, based on additional, albeit coarse, labels. Therefore, in what follows, we present an unsupervised method that yields similar benefits.
In detail, we propose to modify τ according to a cosine schedule, such that it alternates between an upper (τ+) and a lower (τ−) bound at a fixed period length T :
\tau_{\cos}(t) = (\tau^+ - \tau^-) \times \left(1 + \cos(2\pi t/T)\right)/2 + \tau^-; \tag{5}
here, t denotes training epochs. This method is motivated by the observation that τ controls the trade-off between learning easily separable features and learning instance-specific features.
Arguably, however, the models should learn both types of features: i.e. the representation space should be structured according to easily separable features that (optimally) represent semantically meaningful group-wise patterns, whilst still allowing for instance discrimination within those groups.
Therefore, we propose to alternate between both objectives as in Eq. (5), to ensure that throughout training the model learns to encode instance-specific patterns, whilst also structuring the representation space along semantically meaningful features. Note that while we find a cosine schedule to work best and to be robust with respect to the choice for T (Sec. 4.3), we also evaluate alternatives. Even randomly sampling τ from the interval [τ−, τ+] improves the model performance. This indicates that the task switching between group-wise discrimination (large τ ) and instance discrimination (small τ ) is indeed the driving factor behind the performance improvements we observe.
4 EXPERIMENTAL RESULTS
In this section, we validate our hypothesis that simple manipulations of the temperature parameter in Eq. (1) lead to better performance for long-tailed data. First, we introduce our experimental setup in Sec. 4.1, then in Sec. 4.2 we discuss the results across three imbalanced datasets and, finally, we analyse different design choices of the framework through extensive ablation studies in Sec. 4.3.
4.1 IMPLEMENTATION DETAILS
Datasets. We consider long-tailed (LT) versions of the following three popular datasets for the experiments: CIFAR10-LT, CIFAR100-LT, and ImageNet100-LT. For most of the experiments, we follow the setting from SDCLR (Jiang et al., 2021). In the case of CIFAR10-LT/CIFAR100-LT, the original datasets (Krizhevsky et al., 2009) consist of 60000 32x32 images sampled uniformly from 10 and 100 semantic classes, respectively, where 50000 images correspond to the training set and 10000 to the test set. Long-tail versions of the datasets are introduced by Cui et al. (2019) and consist of a subset of the original datasets with an exponential decay in the number of images per class. The imbalance ratio controls the uniformity of the dataset and is calculated as the ratio of the sizes of the biggest and the smallest classes. By default, we use an imbalance ratio of 100 if not stated otherwise. Experiments in Tab. 1 and Tab. 3 are averaged over three runs with different permutations of classes. ImageNet100-LT is a subset of the original ImageNet-100 (Tian et al., 2020a) consisting of 100 classes for a total of 12.21k 256x256 images. The number of images per class varies from 1280 to 25.
Training. We use an SGD optimizer for all experiments with a weight decay of 1e-4. For the learning rate, we utilize a linear warm-up for 10 epochs, followed by a cosine annealing schedule starting from 0.5. We train for 2000 epochs for CIFAR10-LT and CIFAR100-LT and 800 epochs for ImageNet100-LT. For CIFAR10-LT and CIFAR100-LT we use a ResNet18 (He et al., 2016) backbone. For ImageNet100-LT we use a ResNet50 (He et al., 2016) backbone. For both the MoCo (He et al., 2020) and the SimCLR (Chen et al., 2020a) experiments, we follow Jiang et al. (2021) and use the following augmentations: resized crop, color jitter, greyscale, and horizontal flip. MoCo details: we use a dictionary of size 10000, a projection dimensionality of 128 and a projection head with one linear layer. SimCLR details: we train with a batch size of 512 and a projection head that has two layers with an output size of 128. For evaluation, we discard the projection head and apply l2-normalisation. Regarding the proposed temperature schedules (TS), we use a period length of T=400 with τ+=1.0 and τ−=0.1 if not stated otherwise; for more details, see appendix A.2.
Evaluation. We use k nearest neighbours (kNN) and linear classifiers to assess the learned features. For kNN, we compute l2-normalised distances between LT samples from the train set and the class-balanced test set. For each test image, we assign it to the majority class among the top-k closest train images. We report accuracy for kNN with k=1 (kNN@1) and with k=10 (kNN@10). Compared to fine-tuning or linear probing, kNN directly evaluates the learned embedding since it relies on the learned metric and local structure of the space. We also evaluate the linear separability and generalisation of the space with a linear classifier that we train on top of the frozen backbone. For this, we consider two setups: balanced few-shot linear probing (FS LP) and long-tailed linear probing (LT LP). For FS LP, the few-shot train set is a direct subset of the original long-tailed train set with the shot number equal to the minimum class size in the original LT train set. For LT LP, we use the original LT training set. For extended tables, see appendix A.3.
4.2 EFFECTIVENESS OF TEMPERATURE SCHEDULES
Contrastive learning with TS. In Tab. 1 we present the efficacy of temperature schedules (TS) for two well-known contrastive learning frameworks MoCo (He et al., 2020) and SimCLR (Chen et al., 2020a). We find that both frameworks benefit from varying the temperature and we observe consistent improvements over all evaluation metrics for CIFAR10-LT and CIFAR100-LT, i.e. the local structure of the embedding space (kNN) and the global structure (linear probe) are both improved. Moreover, we show in Tab. 3 that our finding also transfers to ImageNet100-LT. Furthermore, in Tab. 2 we evaluate the performance of the proposed method on the CIFAR10 and CIFAR100 datasets with different imbalance ratios. An imbalance ratio of 50 (imb50) reflects less pronounced imbalance, and imb150 corresponds to the datasets with only 30 (CIFAR10) and 3 (CIFAR100) samples for the
smallest class. Varying τ during training improves the performance for different long-tailed data; for a discussion on the dependence of the improvement on the imbalance ratio, please see the appendix.
TS vs SDCLR. Further, we compare our method with SDCLR (Jiang et al., 2021). In SDCLR, SimCLR is modified such that the embeddings of the online model are contrasted with those of a pruned version of the same model, which is updated after every epoch. Since the pruning is done by simply masking the pruned weights of the original model, SDCLR requires twice as much memory compared to the original SimCLR and extra computational time to prune the model every epoch. In contrast, our method does not require any changes in the architecture or training. In Tab. 3 we show that this simple approach improves not only over the original SimCLR, but also over SDCLR in most metrics.
4.3 ABLATIONS
In this section, we evaluate how the hyperparameters in Eq. (5) can influence the model behaviour.
Cosine Boundaries. First, we vary the lower τ− and upper τ+ bounds of τ for the cosine schedule. In Tab. 4 we assess the performance of MoCo+TS with different τ− and τ+ on CIFAR10 with FS LP. We observe a clear trend that with a wider range of τ values the performance increases. We attribute this to the ability of the model to learn better ‘hard’ features with low τ and improve semantic structure for high τ . Note that 0.07 is the value for τ in many current contrastive learning methods.
Cosine Period. Further, we investigate if the length of the period T in Eq. (5) impacts the performance of the model. In Tab. 6, we show that modifying the temperature τ based on the cosine schedule is beneficial during training independently of the period T . The performance varies insignificantly depending on T and consistently improves over the standard fixed τ=0.2, with the best performance achieved at T=400. Even though the performance is stable with respect to the length of the period, it changes within one period, as we show in Fig. 4. Here, we average the accuracy of the last full period over models trained with different T and find that the models reach the best performance around 0.7T . Based on this observation, we recommend stopping training after (n − 0.3)T epochs, where n is the number of full periods.
Alternatives to Cosine Schedule. Additionally, we test different methods of varying the temperature parameter τ and report the results in Tab. 5: we examine a linearly oscillating (oscil) function, a step function, and random sampling. For the linear oscillations, we follow the same schedule as for the cosine version, as shown on the right of Tab. 5. For the step function, we change τ from a low (0.1) to a high (0.5) value and back every 200 epochs. For random, we uniformly sample values for τ from the range [0.1, 0.5]. In Tab. 5 we observe that all of these alternatives for varying the τ value also improve the performance over the fixed temperature, while the cosine schedule achieves the best performance. These results indicate that it is indeed the task switching between group-wise and instance-wise discrimination during training which is the driving factor for the observed improvements for unsupervised long-tail representation learning. We assume that the reason why slow oscillation of the temperature performs better than fast (i.e. random) temperature changes is grounded in the learning dynamics and the slow evolution of the embedding space during training.
5 CONCLUSION
In this work, we discover the surprising effectiveness of temperature schedules for self-supervised contrastive representation learning on imbalanced datasets. In particular, we find that a simple cosine schedule for τ consistently improves two state-of-the-art contrastive methods over several datasets and different imbalance ratios, without introducing any additional cost.
Importantly, our approach is based on a novel perspective on the contrastive loss, in which the average distance maximisation aspect is emphasised. This perspective sheds light on which samples dominate the contrastive loss and explains why large values for τ can lead to the emergence of tight clusters in the embedding space, despite the fact that individual instances always repel each other.
Specifically, we find that while a large τ is necessary to induce semantic structure, the concomitant focus on group-wise discrimination biases the model to encode easily separable features rather than instance-specific details. However, in long-tailed distributions, this can be particularly harmful to the most infrequent classes, as those require a higher degree of instance discrimination to remain distinguishable from the prevalent semantic categories. The proposed cosine schedule for τ overcomes this tension by alternating between an emphasis on instance discrimination (small τ) and group-wise discrimination (large τ). As a result of this constant ‘task switching’, the model is trained to both structure the embedding space according to semantically meaningful features, whilst also encoding instance-specific details such that rare classes remain distinguishable from dominant ones.
ETHICS STATEMENT
The paper proposes an analysis and a method to improve the performance of self-supervised representation learning methods based on the contrastive loss. The method and investigation in this paper do not introduce any ethical issues to the field of representation learning, as it is decoupled from the training data. Nonetheless, we would like to point out that representation learning does not automatically prevent models from learning harmful biases from the training data and should not be used outside of research applications without thorough evaluation for fairness and bias.
ACKNOWLEDGEMENTS
C. R. is supported by VisualAI EP/T028572/1 and ERC-UNION-CoG-101001212.
A APPENDIX
A.1 PSEUDO-CODE FOR REPRODUCIBILITY OF COSINE SCHEDULE
Algorithm 1 Cosine Schedule (written as runnable Python; requires period T > 0, τ− = 0.1, τ+ = 1.0)

import numpy as np

def cosine_tau(epoch, T=400, tau_min=0.1, tau_max=1.0):
    # Eq. (5): tau moves between tau_max (at epoch 0) and tau_min (at epoch T/2)
    return (tau_max - tau_min) * (1 + np.cos(2 * np.pi * epoch / T)) / 2 + tau_min
Insert algorithm 1 into your favourite contrastive learning framework to check it out!
A.2 IMPLEMENTATION DETAILS
Evaluation details. Following Jiang et al. (2021), we separate 5000 images for CIFAR10/100-LT as a validation set for each split. As we discussed in the main paper, the performance of the model depends on the relative position within a period T . Therefore we utilise the validation split to choose a checkpoint for further testing on the standard test splits for CIFAR10/100-LT. Precisely, for each dataset, we select the evaluation epoch for the checkpoint based only on the validation set of the first random split; the other splits of the same dataset are evaluated using the same number of epochs. Note that for ImageNet100-LT there is no validation split and we select the last checkpoint as in Jiang et al. (2021). For a fair comparison, we also reproduce the numbers from Jiang et al. (2021) in the same way.
Division into head, mid, and tail classes. Following Jiang et al. (2021), we divide all the classes into three categories: head classes are those with the most samples, tail classes are those with the fewest samples, and mid classes are the rest. In particular, for CIFAR10-LT for each split there are 4 head classes, 3 mid classes, and 3 tail classes; for CIFAR100-LT there are 34 head classes, 33 mid classes, 33 tail classes; for ImageNet100-LT head classes are classes with more than 100 instances, tail classes have fewer than 20 instances per class, and mid are the rest.
A.3 EXTENDED RESULTS
Extension of Fig. 3. In Fig. 5 we provide full results of kNN accuracy on CIFAR10 when the model is trained with different fixed τ values and with coarse binary supervision. Tail classes in particular are improved by instance discrimination (small τtail).
Head-mid-tail classes evaluation. In the following, we present a detailed comparison of SimCLR and SimCLR+TS on head, mid, and tail classes on CIFAR10-LT in Tab. 7, on CIFAR100-LT in Tab. 8 and on ImageNet100-LT in Tab. 9. We observe consistent improvement for all evaluation metrics for all types of classes over the three datasets.
Influence of TS on Uniform vs Long-Tailed Distributions. To further corroborate that TS is particularly helpful for imbalanced data, we also apply TS to uniformly distributed data. In Tab. 10, we can observe that the cosine schedule yields significant and consistent gains for the long-tailed version of CIFAR10 (CIFAR10-LT), but not for the uniform one (CIFAR10-Uniform). We assume that both head classes and tail classes of a long-tailed distribution benefit from a better separation between the two: on the one hand, the tail classes form better clusters and are thus easier to classify based on their neighbours; on the other hand, the clusters of the head classes are 'purified', which should similarly improve performance. For the uniform distribution, in contrast, we do not observe such an influence of TS and the performance changes only marginally.
A.4 INFLUENCE OF THE POSITIVE SAMPLES ON CONTRASTIVE LEARNING
In Sec. 3.2, we particularly focused on the impact of the negative samples on the learning dynamics under the contrastive objective, as they likely are the driving factor with respect to the semantic structure. In fact, we find that the positive samples should have an inverse relation with the temperature τ and thus cannot explain the observed learning dynamics, as we discuss in the following.
To understand the impact of the positive samples, first note their role in the loss (same as Eq. (4)):
\mathcal{L}^i_c = \log\left(1 + c_{ii}\, S_i\right). \tag{6}

In particular, c_{ii} scales the entire sum S_i = \sum_{j \neq i} \exp(-d_{ij}). As such, encoding two augmentations of the same instance at a large distance is much more ‘costly’ for the model than encoding two different samples close to each other, as each and every summand of S_i is amplified by the corresponding c_{ii}. As a result, the model will be biased to ‘err on the safe side’ and become invariant to the augmentations, which has been one of the main motivations for introducing augmentations in contrastive learning in the first place, cf. Tian et al. (2020b); Chen et al. (2020a); Caron et al. (2020).
Consequently, the positive samples, of course, also influence the forming of clusters in the embedding space as they induce invariance with respect to augmentations. Note, however, that this does not contradict our analysis regarding the impact of negative samples, but rather corroborates it.
In particular, cii biases the model to become invariant to the applied augmentations for all values of τ ; in fact, for small τ , this invariance is even emphasised as cii increases for small τ and the influence of the negatives is diminished. Hence, if the augmentations were the main factor in inducing semantic structure in the embedding space, τ should have the opposite effect of the one we and many others (Wang & Liu, 2021; Zhang et al., 2022; 2021) observe.
Thus, instead of inducing semantic structure on their own, we believe the positive samples to rather play a critical role in influencing which features the model can rely on for grouping samples in the embedding space; for a detailed discussion of this phenomenon, see also Chen et al. (2021). | 1. What is the focus of the paper regarding long-tail data?
2. What are the strengths of the proposed approach, particularly in its ability to improve imbalance pretraining?
3. What are the weaknesses of the paper, especially regarding interpretation and evaluation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Note: These questions aim to guide the reader in understanding the review without being too specific or asking for sensitive information. They should not be used to elicit harmful or unethical responses. | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper explores self-supervised contrastive learning for long-tail data. It presents an empirical study of the impact of the temperature parameter on the head and tail categories. Based on the study, the paper proposes a simple yet effective solution to improve the imbalance pretraining which involves a dynamic temperature schedule. Compared to the vanilla contrastive learning and the SDCLR baseline, the proposed method demonstrates better performance on three long-tail benchmarks: CIFAR10-LT, CIFAR100-LT, ImageNet100-LT.
Strengths And Weaknesses
Strength
[S1] Topic: the paper investigates self-supervised learning for long-tail data. This is one of the first steps toward applying self-supervised approaches to uncurated data, which is of great interest to the community;
[S2] Insight: the paper provides an empirical analysis of the impact of the temperature parameter on the head and tail categories;
[S3] Technology: the proposed dynamic temperature scheduling is technically sound;
[S4] Performance: compared to the vanilla contrastive learning and the SDCLR baseline, the proposed method demonstrates better performance on three long-tail benchmarks: CIFAR10-LT, CIFAR100-LT, ImageNet100-LT.
[S5] Presentation: the writing is clear.
Weaknesses
[W1] Interpretation:
i) I find it difficult to interpret the results of TS. The study in Sec.3.3 and Fig. 3 seems to indicate that TS achieves a trade-off between optimizing the performances of the head or tail classes: as shown in Fig.3, TS slightly reduces the performance of the head classes but significantly improves the performance of the tail classes. On the other hand, results in table 7/8/9 show that TS consistently improves the performance of both the head and tail classes. The improvements on the head and tail classes are also comparable: +2.1%/+2.4% kNN@1 for the head/tail classes of ImageNet100-LT (table 9). It looks like TS is more than a simple trade-off?
ii) it is also unclear to me why decreasing the temperature during the training is the optimal scheduling solution (table 4 and 5). It looks like TS learns the head classes better at the early stage of the training then progressively prioritizes the tail classes. I wonder if TS may hurt the performance of the head classes at the later stage of the training. I would also be curious why alternative schedules perform worse than TS.
Overall, I’m happy to learn about the good performance of TS but the interpretation of TS deserves more investigation than what is presented in the current paper.
[W2] Evaluation. The results of SDCLR in table 3 look different from what was reported in the SDCLR paper. I wonder if these results are reproduced? If this is the case, I wonder if there is any difference in the experimental setting compared to the SDCLR paper. For example, the SDCLR paper reported results with 500 epochs of pretraining while TS is trained with 800 epochs (Sec. 4.1). I wonder if the different methods shown in table 3 share the same experimental configurations.
[W3] The analysis in Sec. 3.2 and 3.3 is limited to contrastive learning. It is not clear to me if TS could be beneficial to non-contrastive SSL methods that use a temperature parameter, e.g. SwAV, DINO, etc.
Clarity, Quality, Novelty And Reproducibility
The paper is written clearly. The proposed method is not new but effective. The authors are suggested to include some clarifications on the experimental settings, as discussed in [W2]. |
ICLR | Title
The Predictron: End-To-End Learning and Planning
Abstract
One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple “imagined” planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.
1 INTRODUCTION
The central idea of model-based reinforcement learning is to decompose the RL problem into two subproblems: learning a model of the environment, and then planning with this model. The model is typically represented by a Markov reward process (MRP) or decision process (MDP). The planning component uses this model to evaluate and select among possible strategies. This is typically achieved by rolling forward the model to construct a value function that estimates cumulative reward. In prior work, the model is trained essentially independently of its use within the planner. As a result, the model is not well-matched with the overall objective of the agent. Prior deep reinforcement learning methods have successfully constructed models that can unroll near pixel-perfect reconstructions (Oh et al., 2015; Chiappa et al., 2016); but are yet to surpass state-of-the-art model-free methods in challenging RL domains with raw inputs (e.g., Mnih et al., 2015; 2016; Lillicrap et al., 2016).

In this paper we introduce a new architecture, which we call the predictron, that integrates learning and planning into one end-to-end training procedure. At every step, a model is applied to an internal state, to produce a next state, reward, discount, and value estimate. This model is completely abstract and its only goal is to facilitate accurate value prediction. For example, to plan effectively in a game, an agent must be able to predict the score. If our model makes accurate predictions, then an optimal plan with respect to our model will also be an optimal plan for the underlying game – even if that model uses a different state space (e.g., an abstract representation of enemy positions, ignoring their shapes and colours), action space (e.g., a high-level action to move away from an enemy), rewards (e.g., a single abstract step could have a higher value than any real reward), or even timestep (e.g., a single abstract step could “jump” the agent to the end of a corridor). All we require is that trajectories through the abstract model produce scores that are consistent with trajectories through the real environment. This is achieved by training the predictron end-to-end, so as to make its value estimates as accurate as possible.

An ideal model could generalise to many different prediction tasks, rather than overfitting to a single task; and could learn from a rich variety of feedback signals, not just a single extrinsic reward. We therefore train the predictron to predict a host of different value functions for a variety of pseudo-reward functions and discount factors. These pseudo-rewards can encode any event or aspect of the environment that the agent may care about, e.g., staying alive or reaching the next room.

We focus upon the prediction task: estimating value functions in MRP environments with uncontrolled dynamics. In this case, the predictron can be implemented as a deep neural network with an
MRP as a recurrent core. The predictron unrolls this core multiple steps and accumulates rewards into an overall estimate of value. We applied the predictron to procedurally generated random mazes, and a simulated pool domain, directly from pixel inputs. In both cases, the predictron significantly outperformed model-free algorithms with conventional deep network architectures; and was much more robust to architectural choices such as depth.
2 BACKGROUND
We consider environments defined by an MRP with states $s \in S$. The MRP is defined by a function, $s', r, \gamma = p(s, \alpha)$, where $s'$ is the next state, $r$ is the reward, and $\gamma$ is the discount factor, which can for instance represent the non-termination probability for this transition. The process may be stochastic, given IID noise $\alpha$. The return of an MRP is the cumulative discounted reward over a single trajectory, $g_t = r_{t+1} + \gamma_{t+1} r_{t+2} + \gamma_{t+1}\gamma_{t+2} r_{t+3} + \dots$, where $\gamma_t$ can vary per time-step.

We consider a generalisation of the MRP setting that includes vector-valued rewards $\mathbf{r}$, diagonal-matrix discounts $\boldsymbol{\gamma}$, and vector-valued returns $\mathbf{g}$; definitions are otherwise identical to the above. We use this bold font notation to closely match the more familiar scalar MRP case; the majority of the paper can be comfortably understood by reading all rewards as scalars, and all discount factors as scalar and constant, i.e., $\gamma_t = \gamma$.

The value function of an MRP $p$ is the expected return from state $s$, $v_p(s) = \mathbb{E}_p[g_t \mid s_t = s]$. In the vector case, these are known as general value functions (Sutton et al., 2011). We will say that a (general) value function $v(\cdot)$ is consistent with environment $p$ if and only if $v = v_p$, which satisfies the following Bellman equation (Bellman, 1957),
$$v_p(s) = \mathbb{E}_p\left[r + \gamma\, v_p(s') \mid s\right]. \tag{1}$$
In model-based reinforcement learning (Sutton and Barto, 1998), an approximation m ≈ p to the environment is learned. In the uncontrolled setting this model is normally an MRP s′, r, γ = m(s, β) that maps from state s to subsequent state s′ and additionally outputs rewards r and discounts γ ; the model may be stochastic given an IID source of noise β. A (general) value function vm(·) is consistent with model m (or valid, (Sutton, 1995)), if and only if it satisfies a Bellman equation vm(s) = Em [r + γvm(s′) | s] with respect to model m. Conventionally, model-based RL methods focus on finding a value function v that is consistent with a separately learned model m.
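As a quick illustration of the generalised return defined above, a scalar-case Python sketch (the helper name is ours, not from the paper):

```python
def mrp_return(rewards, discounts):
    """g_t = r_{t+1} + gamma_{t+1} r_{t+2} + gamma_{t+1} gamma_{t+2} r_{t+3} + ...
    rewards[k] = r_{t+k+1}; discounts[k] = gamma_{t+k+1}; scalar case."""
    g, running = 0.0, 1.0
    for r, gamma in zip(rewards, discounts):
        g += running * r          # reward k is weighted by the product of earlier discounts
        running *= gamma
    return g

# With a constant discount this reduces to the familiar geometric sum:
assert abs(mrp_return([1, 1, 1], [0.9, 0.9, 0.9]) - (1 + 0.9 + 0.81)) < 1e-12
```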
3 PREDICTRON ARCHITECTURE
The predictron is composed of four main components. First, a state representation $\mathbf{s} = f(s)$ that encodes raw input $s$ (this could be a history of observations, in the partially observed setting, for example when $f$ is a recurrent network) into an internal (abstract, hidden) state $\mathbf{s}$. Second, a model $\mathbf{s}', \mathbf{r}, \boldsymbol{\gamma} = m(\mathbf{s}, \beta)$ that maps from internal state $\mathbf{s}$ to subsequent internal state $\mathbf{s}'$, internal rewards $\mathbf{r}$, and internal discounts $\boldsymbol{\gamma}$. Third, a value function $v$ that outputs internal values $\mathbf{v} = v(\mathbf{s})$ representing the future, internal return from internal state $\mathbf{s}$ onwards. The predictron is applied by unrolling its model $m$ multiple “planning” steps to produce internal rewards, discounts and values. We use superscripts $\cdot^k$ to indicate internal steps of the model (which have no necessary connection to time steps $\cdot_t$ of the environment). Finally, these internal rewards, discounts and values are combined together by an accumulator into an overall estimate of value $\mathbf{g}$. The whole predictron, from input state $s$ to output $\mathbf{g}$, may be viewed as a value function approximator for external targets (i.e. the returns in the real environment).

We consider both k-step and λ-weighted accumulators. The k-step predictron rolls its internal model forward $k$ steps. Specifically, the k-step predictron return $\mathbf{g}^k$ (henceforth abbreviated as preturn) is the internal return obtained by accumulating $k$ model steps, plus a final value $\mathbf{v}^k$ from the $k$th step,
$$\mathbf{g}^k = \mathbf{r}^1 + \boldsymbol{\gamma}^1\left(\mathbf{r}^2 + \boldsymbol{\gamma}^2\left(\dots\left(\mathbf{r}^{k-1} + \boldsymbol{\gamma}^{k-1}\left(\mathbf{r}^k + \boldsymbol{\gamma}^k \mathbf{v}^k\right)\right)\dots\right)\right). \tag{2}$$
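A small scalar-case sketch of Eq. (2) (illustrative only; the names are ours):

```python
def k_step_preturn(rewards, discounts, v_k):
    """g^k of Eq. (2): fold k internal rewards/discounts around the bootstrap
    value v^k. rewards[i] = r^{i+1}, discounts[i] = gamma^{i+1}, i = 0..k-1."""
    g = v_k
    for r, gamma in zip(reversed(rewards), reversed(discounts)):
        g = r + gamma * g
    return g

assert k_step_preturn([], [], 2.0) == 2.0                    # g^0 = v^0
assert k_step_preturn([1.0], [0.5], 2.0) == 1.0 + 0.5 * 2.0  # g^1 = r^1 + gamma^1 v^1
```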
The 0-step preturn is simply the first value, $\mathbf{g}^0 = \mathbf{v}^0$; the 1-step preturn is $\mathbf{g}^1 = \mathbf{r}^1 + \boldsymbol{\gamma}^1 \mathbf{v}^1$, and so on (see Fig. 1a).

The λ-predictron combines together many k-step preturns. Specifically, it computes a diagonal weight matrix $\boldsymbol{\lambda}^k$ from each internal state $\mathbf{s}^k$. The accumulator uses weights $\boldsymbol{\lambda}^0, \dots, \boldsymbol{\lambda}^K$ to aggregate over k-step preturns $\mathbf{g}^0, \dots, \mathbf{g}^K$ and output a combined value that we call the λ-preturn $\mathbf{g}^\lambda$,
$$\mathbf{g}^\lambda = \sum_{k=0}^{K} \mathbf{w}^k \mathbf{g}^k \quad \text{where} \quad \mathbf{w}^k = \begin{cases} (\mathbf{1} - \boldsymbol{\lambda}^k) \prod_{j=0}^{k-1} \boldsymbol{\lambda}^j & \text{if } k < K, \\ \prod_{j=0}^{K-1} \boldsymbol{\lambda}^j & \text{otherwise.} \end{cases} \tag{3}$$
where $\mathbf{1}$ is the identity matrix. This λ-preturn is analogous to the λ-return in the forward-view TD(λ) algorithm (Sutton, 1988; Sutton and Barto, 1998). It may also be computed by a backward accumulation through intermediate steps $\mathbf{g}^{k,\lambda}$,
$$\mathbf{g}^{k,\lambda} = (\mathbf{1} - \boldsymbol{\lambda}^k)\mathbf{v}^k + \boldsymbol{\lambda}^k \left(\mathbf{r}^{k+1} + \boldsymbol{\gamma}^{k+1} \mathbf{g}^{k+1,\lambda}\right), \tag{4}$$
where $\mathbf{g}^{K,\lambda} = \mathbf{v}^K$, and then using $\mathbf{g}^\lambda = \mathbf{g}^{0,\lambda}$. Computation in the λ-predictron operates in a sweep, iterating first through the model from $k = 0 \dots K$ and then back through the accumulator from $k = K \dots 0$ in a single “forward” pass of the network (see Figure 1b). Each $\boldsymbol{\lambda}^k$ weight acts as a gate on the computation of the λ-preturn: a value of $\boldsymbol{\lambda}^k = \mathbf{0}$ will truncate the λ-preturn at layer $k$, while a value of $\boldsymbol{\lambda}^k = \mathbf{1}$ will utilise deeper layers based on additional steps of the model $m$; the final weight is always $\boldsymbol{\lambda}^K = \mathbf{0}$. The individual $\boldsymbol{\lambda}^k$ weights may depend on the corresponding abstract state $\mathbf{s}^k$ and can differ per prediction. This enables the predictron to compute to an adaptive depth (Graves, 2016) depending on the internal state and learning dynamics of the network.
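The backward accumulation of Eq. (4) is straightforward to implement; a scalar-case sketch (ours, for illustration), including sanity checks of the gating behaviour described above:

```python
def lambda_preturn(rewards, discounts, values, lambdas):
    """g^lambda via the backward accumulation of Eq. (4), scalar case.
    values[k] = v^k for k = 0..K; rewards[k] = r^{k+1}, discounts[k] = gamma^{k+1},
    lambdas[k] = lambda^k for k = 0..K-1 (lambda^K is fixed to 0)."""
    g = values[-1]                                   # g^{K,lambda} = v^K
    for k in reversed(range(len(lambdas))):
        g = (1 - lambdas[k]) * values[k] + lambdas[k] * (rewards[k] + discounts[k] * g)
    return g

# Gating sanity checks: lambda^0 = 0 truncates to v^0; lambda^k = 1 uses the deepest preturn.
v, r, gam = [1.0, 2.0, 4.0], [0.1, 0.2], [0.9, 0.9]
assert lambda_preturn(r, gam, v, [0.0, 0.0]) == v[0]
deepest = r[0] + gam[0] * (r[1] + gam[1] * v[2])
assert abs(lambda_preturn(r, gam, v, [1.0, 1.0]) - deepest) < 1e-12
```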
4 PREDICTRON LEARNING UPDATES
We first consider updates that optimise the joint parameters θ of the state representation, model, and value function. We begin with the k-step predictron. We update the k-step predictron $\mathbf{g}^k$ towards a target outcome $\mathbf{g}$, such as the Monte-Carlo return from the real environment, by minimising a mean-squared error loss,
$$L^k = \frac{1}{2}\left\|\mathbb{E}_p[\mathbf{g} \mid s] - \mathbb{E}_m[\mathbf{g}^k \mid s]\right\|^2, \qquad \frac{\partial l^k}{\partial \theta} = \left(\mathbf{g} - \mathbf{g}^k\right)\frac{\partial \mathbf{g}^k}{\partial \theta}, \tag{5}$$
where $l^k = \frac{1}{2}\|\mathbf{g} - \mathbf{g}^k\|^2$ is the sample loss. We can use the gradient of the sample loss to update parameters, e.g. by stochastic gradient descent. For stochastic models, two independent samples are required for $\mathbf{g}^k$ and $\frac{\partial \mathbf{g}^k}{\partial \theta}$ to get unbiased samples for the gradient of $L^k$.
The λ-predictron combines together many k-step preturns. To update the joint parameters θ, we can uniformly average the losses on the individual preturns $\mathbf{g}^k$,
$$L^{0:K} = \frac{1}{2K}\sum_{k=0}^{K}\left\|\mathbb{E}_p[\mathbf{g} \mid s] - \mathbb{E}_m[\mathbf{g}^k \mid s]\right\|^2, \qquad \frac{\partial l^{0:K}}{\partial \theta} = \frac{1}{K}\sum_{k=0}^{K}\left(\mathbf{g} - \mathbf{g}^k\right)\frac{\partial \mathbf{g}^k}{\partial \theta}. \tag{6}$$
Alternatively, we could weight each loss by the usage $\mathbf{w}^k$ of the corresponding preturn, such that the gradient is $\sum_{k=0}^{K} \mathbf{w}^k\left(\mathbf{g} - \mathbf{g}^k\right)\frac{\partial \mathbf{g}^k}{\partial \theta}$. The λ-predictron uses an accumulator with additional parameters η that determine the relative weighting of the k-step preturns. These weights are also updated so as to minimise a mean-squared error loss $L^\lambda$,
$$L^\lambda = \frac{1}{2}\left\|\mathbb{E}_p[\mathbf{g} \mid s] - \mathbb{E}_m[\mathbf{g}^\lambda \mid s]\right\|^2, \qquad \frac{\partial l^\lambda}{\partial \eta} = \left(\mathbf{g} - \mathbf{g}^\lambda\right)\frac{\partial \mathbf{g}^\lambda}{\partial \eta}. \tag{7}$$
In summary, the joint parameters θ of the state representation $f$, the model $m$, and the value function $v$ are updated to make each of the k-step preturns $\mathbf{g}^k$ more similar to the target $\mathbf{g}$, and the parameters η of the λ-accumulator are updated to make the aggregate λ-preturn $\mathbf{g}^\lambda$ more similar to the target $\mathbf{g}$.
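A sketch of these losses in PyTorch (scalar usage weights; a simplification of the diagonal-matrix case in the paper, so illustrative rather than exact):

```python
import torch

def preturn_losses(preturns, target, usage_weights=None):
    """Per Eqs. (5)-(6): squared error of each k-step preturn against a common
    target g. `preturns`: list of K+1 tensors g^0..g^K; `target`: tensor g.
    With `usage_weights` (the w^k, treated as fixed here) the losses are
    usage-weighted; otherwise they are uniformly averaged."""
    losses = torch.stack([0.5 * (target - g_k).pow(2).sum() for g_k in preturns])
    if usage_weights is None:
        return losses.mean()                     # uniform averaging, Eq. (6)
    return (usage_weights.detach() * losses).sum()  # usage weighting
```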
4.1 CONSISTENCY (SEMI-SUPERVISED) LEARNING WITH THE λ-PREDICTRON
Ideally, the predictron $(f, m, v)$ learns preturns that are all equal in expectation to the true value function of the environment, $\mathbb{E}_m[\mathbf{g}^k \mid s] = \mathbb{E}_p[\mathbf{g}_t \mid s] = \mathbf{v}_p(s)$, in which case the preturns must be equal in expectation, $\mathbb{E}_m[\mathbf{g}^0 \mid s] = \mathbb{E}_m[\mathbf{g}^1 \mid s] = \dots = \mathbb{E}_m[\mathbf{g}^K \mid s]$. In addition, each k-step preturn must then be equal in expectation to the λ-preturn, $\mathbb{E}_m[\mathbf{g}^k \mid s] = \mathbb{E}_m[\mathbf{g}^\lambda \mid s]$, for any λ parameters.

All these consistency relations between preturns give rise to additional constraints upon the predictron. Specifically, we may adjust the parameters of the predictron to lead to consistent preturns, even in the absence of labelled targets. Concretely, we can adjust each preturn $\mathbf{g}^k$ towards the λ-preturn $\mathbf{g}^\lambda$; in other words, we can update each individual value estimate towards the best aggregated estimate by minimizing
$$L = \frac{1}{2}\sum_{k=0}^{K}\left\|\mathbb{E}_m[\mathbf{g}^\lambda \mid s] - \mathbb{E}_m[\mathbf{g}^k \mid s]\right\|^2, \qquad \frac{\partial l}{\partial \theta} = \sum_{k=0}^{K}\left(\mathbf{g}^\lambda - \mathbf{g}^k\right)\frac{\partial \mathbf{g}^k}{\partial \theta}. \tag{8}$$
Here $\mathbf{g}^\lambda$ is considered fixed; the parameters θ are only updated to make $\mathbf{g}^k$ more similar to $\mathbf{g}^\lambda$, not vice versa. This consistency update does not require any labels $\mathbf{g}$ or samples from the environment. As a result, it can be applied to (potentially hypothetical) states that have no associated ‘real’ (e.g. Monte-Carlo) outcome: we update the value estimates to be self-consistent with each other. Note the similarity with the semi-supervised setting, where we may have unlabelled inputs.
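A minimal PyTorch sketch of the consistency update, with the λ-preturn detached so that it acts as a fixed target, as described above:

```python
def consistency_loss(preturns, g_lambda):
    """Eq. (8): regress each g^k toward the aggregated lambda-preturn. The
    target g^lambda is detached, so only the individual preturns move;
    no environment labels are needed."""
    target = g_lambda.detach()
    return sum(0.5 * (target - g_k).pow(2).sum() for g_k in preturns)
```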
5 EXPERIMENTS
We conducted experiments on two domains. The first domain consists of randomly generated 20×20 mazes in which each location either is empty or contains a wall. Two locations in a maze are considered connected if they are both empty and we can reach one from the other by moving horizontally or vertically through adjacent empty cells. The goal is to predict, for each of the locations on the diagonal from top-left to bottom-right of the maze, whether the bottom-right corner is connected to that location, given the entire maze as an input image. Some of these predictions will be straightforward, for instance for locations on the diagonal that contain a wall themselves and for locations close to the bottom right. Many other predictive questions seem to require a simple algorithm, such as some form of a flood fill or search; our hypothesis is that an internal model can learn to emulate such algorithms, where naive approximation may struggle. A few example mazes are shown in Figure 2.

Our second domain is a simulation of the game of pool, using four balls and four pockets. The simulator is implemented in the physics engine Mujoco (Todorov et al., 2012). We generate sequences of RGB frames starting from a random arrangement of balls on the table. The goal is to simultaneously learn to predict future events for each of the four balls, given 5 RGB frames as input. These events include: collision with any other ball, collision with any boundary of the table, entering a quadrant (×4, for each quadrant), being located in a quadrant (×4, for each quadrant), and entering a pocket (×4, for each pocket). Each of these 14×4 events provides a binary pseudo-reward that we combine with 5 different discount factors {0, 0.5, 0.9, 0.98, 1} and predict their cumulative discounted sum over various time spans. This yields a total of 280 general value functions. An example trajectory is shown in Figure 2.

In both domains, inputs are presented as minibatches of i.i.d. samples with their regression targets. Additional domain details are provided in Appendix E.
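A quick sanity check of the target count described above:

```python
# 14 binary events per ball x 4 balls x 5 discount factors = 280 GVFs.
events_per_ball = 2 + 4 + 4 + 4   # ball/rail collisions, enter quadrant, in quadrant, pockets
discounts = [0, 0.5, 0.9, 0.98, 1]
assert events_per_ball * 4 * len(discounts) == 280
```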
5.1 EXPLORING THE PREDICTRON ARCHITECTURE
Our first set of experiments examines three binary dimensions that differentiate the predictron from standard deep networks. We compare eight predictron variants corresponding to the corners of the cube on the left in Figure 3.

The first dimension corresponds to whether or not the predictron architecture utilises the structure of an MRP model. In the MRP case, labelled r, γ, internal rewards and discounts are both learned. In the non-r, γ case, which corresponds to a vanilla hidden-to-hidden neural network module, internal rewards and discounts are ignored by fixing their values to $\mathbf{r}^k = \mathbf{0}$ and $\boldsymbol{\gamma}^k = \mathbf{1}$.

The second dimension is whether a K-step accumulator or λ-accumulator is used to aggregate over preturns. When a λ-accumulator is used, a λ-preturn is computed as described in Section 3. Otherwise, intermediate preturns are ignored by fixing their values to $\boldsymbol{\lambda}^k = \mathbf{1}$ for $k < K$. In this case, the overall output of the predictron is simply the maximum-depth preturn $\mathbf{g}^K$.

The third dimension, labelled usage weighting, defines the loss that is used to update the parameters θ. On this dimension, we consider two options: the preturn losses can either be weighted uniformly (see Equation 6), or the update for each preturn $\mathbf{g}^k$ can be weighted according to the weight $\mathbf{w}^k$ that determines how much it is used in the λ-predictron’s overall output. We call the latter loss ‘usage weighted’. Note that for architectures without a λ-accumulator, $\mathbf{w}^k = \mathbf{0}$ for $k < K$, and $\mathbf{w}^K = \mathbf{1}$, thus usage weighting then implies backpropagating only the loss on the final preturn $\mathbf{g}^K$.

All variants utilise a convolutional core with 2 intermediate hidden layers (see Appendix A); parameters were updated by supervised learning (see Appendix B for more details). Root mean squared prediction errors for each architecture, aggregated over all predictions, are shown in Figure 3.

[Figure plots omitted: RMSE against number of updates (log scale on the random mazes), for shared-core and unshared-core variants; compared curves include the deep net, deep net with skips, (r, γ, λ)-predictron, and (r, γ, λ)-predictron with skips, as well as ConvNet/ResNet baselines with and without weight sharing and usage weighting.]

Figure 4: Comparing predictron to baselines. Aggregated prediction errors on random mazes (top) and pool (bottom) over all predictions for the eight architectures corresponding to the cube on the left. Each line is the median of RMSE over five seeds; shaded regions encompass all seeds. The full (r, γ, λ)-predictron (red), consistently outperformed conventional deep network architectures (black), with and without skips and with and without weight sharing.

The top row corresponds to the random mazes and the bottom row to the pool domain. The main conclusion is that learning an MRP model improved performance greatly. The inclusion of λ weights helped as well, especially on pool. Usage weighting further improved performance.
5.2 COMPARING THE PREDICTRON TO OTHER DEEP NETWORKS
Our second set of experiments compares the predictron to feedforward and recurrent deep learning architectures, with and without skip connections. We compare the corners of a new cube, as depicted on the left in Figure 4, based on three different binary dimensions.

The first dimension of this second cube is whether we use a predictron, or a (non-λ, non-r, γ) deep network that does not have an internal model and does not output or learn from intermediate predictions. We use the most effective predictron from the previous section, i.e., the (r, γ, λ)-predictron with usage weighting.

The second dimension is whether weights are shared between all cores (as in a recurrent network), or whether each core uses separate weights (as in a feedforward network). We note that the non-λ, non-r, γ variants of the predictron then correspond to standard (convolutional) feedforward and (unrolled) recurrent neural networks respectively.

The third dimension is whether we include skip connections. This is equivalent to defining the model step to output a change to the current state, $\Delta \mathbf{s}$, and then defining $\mathbf{s}^{k+1} = h(\mathbf{s}^k + \Delta \mathbf{s}^k)$, where $h$ is the non-linear function, in our case a ReLU, $h(x) = \max(0, x)$. The deep network with skip connections is a variant of ResNet (He et al., 2015).

Root mean squared prediction errors for each architecture are shown in Figure 4. All (r, γ, λ)-predictrons (red lines) outperformed the corresponding feedforward or recurrent neural network baselines (black lines) both in the random mazes and in pool. We also investigated the effect of changing the depth of the networks (see Appendix C). The predictron outperformed the corresponding feedforward or recurrent baselines for all depths, with and without skip connections.
5.3 SEMI-SUPERVISED LEARNING BY CONSISTENCY
We now consider how to use the predictron for semi-supervised learning, training the model on a combination of labelled and unlabelled random mazes. Semi-supervised learning is important because a common bottleneck in applying machine learning in the real world is the difficulty of collecting labelled data, whereas often large quantities of unlabelled data exist. We trained a full (r, γ, λ)-predictron by alternating standard supervised updates with consistency updates, obtained by stochastically minimizing the consistency loss (8), on the unlabelled samples. For each supervised update we apply either 0, 1, or 9 consistency updates. Figure 5 shows that
the performance improved monotonically with the number of consistency updates, measured as a function of the number of labelled samples consumed.
5.4 ANALYSIS OF ADAPTIVE DEPTH
In principle, the predictron can adapt its depth to ‘think more’ about some predictions than others, perhaps depending on the complexity of the underlying target. We investigate this by looking at qualitatively different prediction types in pool: ball collisions, rail collisions, pocketing balls, and entering or staying in quadrants. For each prediction type we consider several different time-spans (determined by the real-world discount factors associated with each pseudo-reward). Figure 6 shows distributions of depth for each type of prediction. The ‘depth’ of a predictron is here defined as the effective number of model steps. If the predictron relies fully on the very first value (i.e., $\lambda^0 = 0$), this counts as 0 steps. If, instead, it learns to place equal weight on all rewards and on the final value, this counts as 16 steps. Concretely, the depth $d$ can be defined recursively as $d = d^0$ where $d^k = \lambda^k(1 + \gamma^k d^{k+1})$ and $d^K = 0$. Note that even for the same input state, each prediction has a separate depth.

The depth distributions exhibit three properties. First, different types of predictions used different depths. Second, depth was correlated with the real-world discount for the first four prediction types. Third, the distributions are not strongly peaked, which implies that the depth can differ per input even for a single real-world discount and prediction type. In a control experiment (not shown) we used a scalar λ shared among all predictions, which reduced performance in all scenarios, indicating that the heterogeneous depth is a valuable form of flexibility.
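The depth recursion is easy to compute; a scalar-case sketch (ours, for illustration):

```python
def effective_depth(lambdas, discounts):
    """d = d^0 with d^k = lambda^k (1 + gamma^k d^{k+1}) and d^K = 0, scalar case.
    lambdas[k] = lambda^k and discounts[k] = gamma^k, each of length K."""
    d = 0.0
    for lam, gamma in zip(reversed(lambdas), reversed(discounts)):
        d = lam * (1 + gamma * d)
    return d

K = 16
assert effective_depth([0.0] * K, [1.0] * K) == 0   # rely on v^0 only: 0 steps
assert effective_depth([1.0] * K, [1.0] * K) == K   # full 16 model steps
```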
5.5 VISUALIZING THE PREDICTIONS IN THE POOL DOMAIN
We test the quality of the predictions in the pool domain to evaluate whether they are well-suited to making decisions. For each sampled pool position, we consider a set I of different initial conditions (different angles and velocity of the white ball), and ask which is more likely to lead to pocketing coloured balls. For each initial condition s ∈ I , we apply the (r, γ, λ)-predictron (shared cores, 16 model steps, no skip connections) to obtain predictions gλ. We sum the predictions that correspond
to pocketing any ball except the white ball, and to real-world discounts γ = 0.98 and γ = 1. We select the condition s∗ that maximises this sum. We then roll forward the pool simulator from s∗ and log the number of pocketing events. Figure 2 shows a sampled rollout, using the predictron to pick s∗. When providing the choice of 128 angles and two velocities for initial conditions (|I| = 256), this procedure resulted in pocketing 27 coloured balls in 50 episodes. Using the same procedure with an equally deep convolutional network only resulted in 10 pocketing events. These results suggest that the lower loss of the learned (r, γ, λ)-predictron translated into meaningful improvements when informing decisions. A video of the rollouts selected by the predictron is available here: https://youtu.be/BeaLdaN2C3Q.
6 RELATED WORK
Lee et al. (2015) introduced a neural network architecture where classifications branch off intermediate hidden layers. An important difference with respect to the λ-predictron is that the weights are hand-tuned as hyper-parameters, whereas in the predictron the λ weights are learnt and, more importantly, conditional on the input. Another difference is that the loss on the auxiliary classifications is used to speed up learning, but the classifications themselves are not combined into an aggregate prediction; the output of the model itself is the deepest prediction.

Graves (2016) introduced an architecture with adaptive computation time (ACT), with a discrete (but differentiable) decision on when to halt, and aggregating over the outputs at each pondering step. This is related to our λ weights, but obtains depth in a different way; one notable difference is that the λ-predictron can choose different pondering depths for each of its predictions.

Value iteration networks (VINs) (Tamar et al., 2016) also learn value functions end-to-end using an internal model, similar to the (non-λ) predictron. However, VINs plan via convolutional operations over the full input state space; whereas the predictron plans via imagined trajectories through an abstract state space. This may allow the predictron architecture to scale much more effectively in domains that do not have a natural two-dimensional encoding of the state space.

The notion of learning about many predictions of the future relates to work on predictive state representations (PSRs; Littman et al., 2001), general value functions (GVFs; Sutton et al., 2011), and nexting (Modayil et al., 2012). Such predictions have been shown to be useful as representations (Schaul and Ring, 2013) and for transfer (Schaul et al., 2015). So far, however, none of these have been considered for learning abstract models.

Schmidhuber (2015) discusses learning abstract models, but maintains separate losses for the model and a controller, and suggests training the model unsupervised to compactly encode the entire history of observations, through predictive coding. The predictron’s abstract model is instead trained end-to-end to obtain accurate values.
7 CONCLUSION
The predictron is a single differentiable architecture that rolls forward an internal model to estimate external values. This internal model may be given both the structure and the semantics of traditional reinforcement learning models. But unlike most approaches to model-based reinforcement learning, the model is fully abstract: it need not correspond to the real environment in any human understandable fashion, so long as its rolled-forward “plans” accurately predict outcomes in the true environment. The predictron may be viewed as a novel network architecture that incorporates several separable ideas. First, the predictron outputs a value by accumulating rewards over a series of internal planning steps. Second, each forward pass of the predictron outputs values at multiple planning depths. Third, these values may be combined together, also within a single forward pass, to output an overall ensemble value. Finally, the different values output by the predictron may be encouraged to be self-consistent with each other, to provide an additional signal during learning. Our experiments demonstrate that these differences result in more accurate predictions of value, in reinforcement learning environments, than more conventional network architectures. We have focused on value prediction tasks in uncontrolled environments. However, these ideas may transfer to the control setting, for example by using the predictron as a Q-network (Mnih et al., 2015). Even more intriguing is the possibility of learning an internal MDP with abstract internal actions, rather than the MRP considered in this paper. We aim to explore these ideas in future work.
A ARCHITECTURE
The state representation $f$ is a two-layer convolutional neural network (LeCun et al., 1998). There is a core $c$, again based on convolutions, that combines both MRP model and λ-network into a single repeatable module, such that $\mathbf{s}^{k+1}, \mathbf{r}^{k+1}, \boldsymbol{\gamma}^{k+1}, \boldsymbol{\lambda}^k = c(\mathbf{s}^k)$. This core is deterministic, and is duplicated $K$ times in the predictron with shared weights. (The predictron with unshared weights has $K$ distinct cores.) Finally, the value network $v$ is a fully connected neural network that computes $\mathbf{v}^k = v(\mathbf{s}^k)$.

Concretely, the core (Figure 7) consists first of a convolutional layer that maps into an intermediate (hidden) layer. From this layer, another two convolutions compute the next abstract state of the predictron. Additionally, this same hidden layer is flattened and fed into three separate networks, with two fully connected layers each. The outputs of these three networks represent the internal rewards, discounts, and lambdas. A similar small network also hangs off the internal states, in addition to the core, and computes the values. All convolutions use 3×3 filters and a stride of one, and use padding to retain the size of the feature maps. All feature maps have 32 channels. The hidden layers within the MLPs have 32 hidden units. In Figure 7 the convolutional layers are schematically drawn with three channels, flattening is represented by curly brackets, while the arrows represent the small multi-layer perceptrons which compute values, rewards, discounts and lambdas.

We allow up to 16 model steps in our experiments, resulting in 52-layer deep networks: two convolutional layers for the state representations, 3×16 = 48 convolutional layers for the core steps, and two fully-connected layers for the values on top of the final state. Between each two layers we apply batch normalization (Ioffe and Szegedy, 2015) followed by a ReLU non-linearity (Glorot et al., 2011). The value and reward networks end with a linear layer, whereas the discount and λ-networks additionally add a sigmoid non-linearity to ensure that these quantities are in [0, 1].
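A rough PyTorch sketch of this core, using the layer sizes from the text; batch-norm placement and the value head (which hangs off the states rather than the core) are omitted, so treat it as illustrative rather than exact:

```python
import torch
import torch.nn as nn

class PredictronCore(nn.Module):
    """Sketch of the core in Figure 7: one conv to a hidden layer, two convs
    from there to the next abstract state, and three two-layer MLP heads for
    the internal rewards, discounts, and lambdas."""
    def __init__(self, channels=32, hidden=32, n_preds=1):
        super().__init__()
        self.to_hidden = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.to_next_state = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())

        def head(squash):  # two fully connected layers per output
            layers = [nn.Flatten(), nn.LazyLinear(hidden), nn.ReLU(),
                      nn.Linear(hidden, n_preds)]
            if squash:     # discounts and lambdas are squashed into [0, 1]
                layers.append(nn.Sigmoid())
            return nn.Sequential(*layers)

        self.reward, self.discount, self.lam = head(False), head(True), head(True)

    def forward(self, s):
        h = self.to_hidden(s)
        return self.to_next_state(h), self.reward(h), self.discount(h), self.lam(h)
```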
B TRAINING
All experiments used the supervised (Monte-Carlo) update described in Section 4, except for the semi-supervised experiment, which used the consistency update described in Section 4.1. We update all parameters by applying the Adam optimiser (Kingma and Ba, 2015) to stochastic gradients of the corresponding loss functions. Each return is normalised by dividing it by its standard deviation (as measured, prior to the experiment, on a set of 20,000 episodes). In all experiments, the learning rate was 0.001, and the other parameters of the Adam optimiser were $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. We used mini-batches of 100 samples.
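In code, this corresponds to the following optimiser configuration (the ε symbol was lost in the source text and is assumed here to be Adam's usual epsilon):

```python
import torch

params = [torch.nn.Parameter(torch.zeros(3))]  # stand-in for model parameters
optimizer = torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-8)
```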
C COMPARING ARCHITECTURES OF DIFFERENT DEPTHS
We investigated the effect of changing the depth of the networks, with and without skip connections. Figure 8 shows that skip connections (dashed lines) make the conventional architectures
(black/grey lines) more robust to the depth (i.e., the black/grey dashed lines almost overlap, especially on pool), and that the predictron outperforms the corresponding feedforward or recurrent baselines for all depths, with and without skips.
D CAPACITY COMPARISONS
In this section, we present some additional experiments comparing the predictron to more conventional deep networks. The purposes of these experiments are 1) to show that the conclusions obtained above do not depend on the precise architecture used, and 2) to show that the structure of the network, whether we use a predictron or not, is more important than the raw number of parameters. Specifically, we again consider the same 20 by 20 random mazes, and the pool task described in the main text. As described in Section A, for the results in the paper we used an encoder that preserved the size of the input planes, 20×20 for the mazes and 28×28 for pool. Each convolution had 32 channels and therefore the abstract states were 20×20×32 for the mazes and 28×28×32 for pool. We now consider a different architecture, where we no longer pad the convolutions used in the encoder. For the mazes, we still use two layers of 3×3 stride-1 convolutions, which means the planes reduce in size to 16×16. This means that the abstract states are about one third smaller. For pool, we use three 5×5 stride-1 convolutions, which bring us from 28×28 down to 16×16 as well. So, the abstract states are now of equal size for both experiments. For pool, this is approximately a two-thirds reduction, which helps reduce the compute needed to run the model. Most of the parameters in the predictron are in the fully connected layers. Previously, the first fully connected layer for each of the internal values, rewards, discounts, and λ-parameters would take a flattened abstract state, and then go into 32 hidden nodes. This means the number of parameters in this layer was 20×20×32×32 = 409,600 for the mazes and 28×28×32×32 = 802,816 for pool. The predictron with shared core would have four of these layers, one for each of the internal values, rewards, discounts, and λs, compared to one for the deep network which only has values. We change this in two ways. First, we add a 1×1 convolution with a stride of 1 and 8 channels before the first fully connected layer for each of these outputs. This reduces the number of channels, and therefore the number of parameters in the subsequent fully-connected layer, by one fourth. Second, we tested three different numbers of hidden nodes: 32, 128, or 512.
The deep network with 128 hidden nodes for its values has the exact same number of parameters as the (r, γ, λ)-predictron with 32 hidden nodes for each of its outputs. Before, the deep network had fewer parameters, because we kept this number fixed at 32 across experiments. This opens the question of whether the improved performance of the predictron was not just an artifact of having more parameters. We tested this hypothesis, and the results are shown in Figure 9. Figure 9 shows that, in each setting (on the mazes and pool, and with or without shared cores), the predictrons always performed better than all the deep networks. This includes the 32-node predictron (darkest red) compared to the 512-node deep network (lightest blue), even though the latter has approximately 4 times as many parameters (1.27M vs 4.85M). This means that the number of parameters mattered less than whether or not we use a predictron.
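The parameter arithmetic above checks out:

```python
# First fully connected layer per head, with the padded encoder:
assert 20 * 20 * 32 * 32 == 409_600   # mazes
assert 28 * 28 * 32 * 32 == 802_816   # pool
# The extra 1x1 conv (32 -> 8 channels) shrinks the subsequent FC layer 4x:
assert (16 * 16 * 8) * 32 == (16 * 16 * 32 * 32) // 4
```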
E ADDITIONAL DOMAIN DETAILS
We now provide some additional details of domains.
E.1 POOL
To generate sequences in the Pool domain, the initial locations of 4 balls of different colours are sampled at random. The white ball is the only one moving initially. Its velocity has a norm sampled uniformly between 7 and 14. The initial angle is sampled uniformly in the range (0, 2π). From the initial condition, the Mujoco simulation is run forward until all balls have stopped moving; sequences that last more than 151 frames are rejected, and a new one is generated as replacement. Each frame is rendered by Mujoco as a 280×280 RGB image, and subsequently downsampled through bilinear interpolation to a 28×28 RGB input (see Figure 10 for an example). Since the 280 signals described in Section 6.1 as targets for the Pool experiments have very different levels of sparsity, resulting in values with very different scales, we have normalised the pseudo-returns. The normalization procedure consisted of dividing all targets by their standard deviation, as empirically measured across an initial set of 20,000 sequences.
E.2 RANDOM MAZES
To generate mazes we first determine, with a stochastic line search, a number of walls so that the top-left corner is connected to the bottom-right corner (both always forced to be empty) in approximately 50% of the mazes. We then shuffle the walls uniformly randomly. For 20 by 20 mazes this means 70% of locations are empty and 30% contain walls. More than a googol different such 20-by-20 mazes exist (as $\binom{398}{120} > 10^{100}$). | 1. What are the strengths and weaknesses of the proposed architecture for regression?
2. How does the proposed model compare to baseline models in terms of performance and number of parameters?
3. Are there any concerns regarding overfitting or model specification errors?
4. How does the proposed model perform on other standard regression problems, such as ImageNet or CIFAR?
5. What are some potential explanations for why the proposed model may perform well on certain tasks but not others? | Review | Review
I think there may be a nice paper to be made from this, but as it is, it should not be accepted. The authors describe a new architecture for regression, inspired by techniques for estimating the value function of a Markov reward process. The connection is interesting, and there is certainly merit in the idea. However, the writing is confusing, and as far as I can tell, the experiments and discussion are inadequate. It is quite possible that I am misunderstanding some things, so I am not putting high confidence.
Because of all the discussion of MRP's and the background that inspired the model, it is difficult to see that the authors are in a pure, i.i.d. regression setting, where they sample inputs i.i.d. (with deterministic outputs given the input) from a distribution, and try to match a parameterized function to the input output pairs.
Because they are in this setting, there is a lot lacking from the experiments. For example, they report l2 loss on the maze problem, but not "percent correct"; indeed, it looks like the deep net with skips goes to about .001 average l2 loss on the 0-1 output maze problem. This is an issue because it suggests that by simply thresholding the outputs, you could get nearly perfect results, which would point to a model specification error of the baseline. Are there sigmoids at the end of the baseline plain deep network? Note that the proposed models do have sigmoids in the outputs in the multiplicative weightings.
How does the number of parameters of the proposed network compare to the baselines? Is the better performance (and again, better is really marginal if I am understanding the way loss is measured) simply an issue of modeling power (perhaps because of the multiplicative connections of the proposed model vs. the baseline)? Because the input is sampled i.i.d. and the test distribution exactly matches the train, this is an important part of the discussion. Moreover, there do not seem to be experiments where the size of the training set is fixed; the axis in the graphs is number of samples seen, which is tied to the number of optimization steps. Thus there is no testing of over-fitting.
Why not try the model on more standard regression problems (as at heart, the paper seems to be about a new convnet architecture for regression)? Show imagenet or cifar accuracies, for example. If the proposed model does worse there, try to explain/understand what it is about the reported tasks that favor the proposed model?
**********************************************************************************
edited with increased confidence in post review discussions
********************************************************************************** |
ICLR | Title
The Predictron: End-To-End Learning and Planning
Abstract
One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple “imagined” planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.
1 INTRODUCTION
The central idea of model-based reinforcement learning is to decompose the RL problem into two subproblems: learning a model of the environment, and then planning with this model. The model is typically represented by a Markov reward process (MRP) or decision process (MDP). The planning component uses this model to evaluate and select among possible strategies. This is typically achieved by rolling forward the model to construct a value function that estimates cumulative reward. In prior work, the model is trained essentially independently of its use within the planner. As a result, the model is not well-matched with the overall objective of the agent. Prior deep reinforcement learning methods have successfully constructed models that can unroll near pixel-perfect reconstructions (Oh et al., 2015; Chiappa et al., 2016); but are yet to surpass state-of-the-art modelfree methods in challenging RL domains with raw inputs (e.g., Mnih et al., 2015; 2016; Lillicrap et al., 2016). In this paper we introduce a new architecture, which we call the predictron, that integrates learning and planning into one end-to-end training procedure. At every step, a model is applied to an internal state, to produce a next state, reward, discount, and value estimate. This model is completely abstract and its only goal is to facilitate accurate value prediction. For example, to plan effectively in a game, an agent must be able to predict the score. If our model makes accurate predictions, then an optimal plan with respect to our model will also be an optimal plan for the underlying game – even if that model uses a different state space (e.g., an abstract representation of enemy positions, ignoring their shapes and colours), action space (e.g., a high-level action to move away from an enemy), rewards (e.g., a single abstract step could have a higher value than any real reward), or even timestep (e.g., a single abstract step could “jump” the agent to the end of a corridor). All we require is that trajectories through the abstract model produce scores that are consistent with trajectories through the real environment. This is achieved by training the predictron end-to-end, so as to make its value estimates as accurate as possible. An ideal model could generalise to many different prediction tasks, rather than overfitting to a single task; and could learn from a rich variety of feedback signals, not just a single extrinsic reward. We therefore train the predictron to predict a host of different value functions for a variety of pseudoreward functions and discount factors. These pseudo-rewards can encode any event or aspect of the environment that the agent may care about, e.g., staying alive or reaching the next room. We focus upon the prediction task: estimating value functions in MRP environments with uncontrolled dynamics. In this case, the predictron can be implemented as a deep neural network with an
*Primary contributors
MRP as a recurrent core. The predictron unrolls this core multiple steps and accumulates rewards into an overall estimate of value. We applied the predictron to procedurally generated random mazes, and a simulated pool domain, directly from pixel inputs. In both cases, the predictron significantly outperformed model-free algorithms with conventional deep network architectures; and was much more robust to architectural choices such as depth.
2 BACKGROUND
We consider environments defined by an MRP with states s ∈ S. The MRP is defined by a function, s′, r, γ = p(s, α), where s′ is the next state, r is the reward, and γ is the discount factor, which can for instance represent the non-termination probability for this transition. The process may be stochastic, given IID noise α. The return of an MRP is the cumulative discounted reward over a single trajectory, gt = rt+1 + γt+1rt+2 +γt+1γt+2rt+3 + ... , where γt can vary per time-step. We consider a generalisation of the MRP setting that includes vector-valued rewards r, diagonal-matrix discounts γ , and vector-valued returns g; definitions are otherwise identical to the above. We use this bold font notation to closely match the more familiar scalar MRP case; the majority of the paper can be comfortably understood by reading all rewards as scalars, and all discount factors as scalar and constant, i.e., γt = γ. The value function of an MRP p is the expected return from state s, vp(s) = Ep [gt | st = s]. In the vector case, these are known as general value functions (Sutton et al., 2011). We will say that a (general) value function v(·) is consistent with environment p if and only if v = vp which satisfies the following Bellman equation (Bellman, 1957),
vp(s) = Ep [r + γvp(s′) | s] . (1)
In model-based reinforcement learning (Sutton and Barto, 1998), an approximation m ≈ p to the environment is learned. In the uncontrolled setting this model is normally an MRP s′, r, γ = m(s, β) that maps from state s to subsequent state s′ and additionally outputs rewards r and discounts γ ; the model may be stochastic given an IID source of noise β. A (general) value function vm(·) is consistent with model m (or valid, (Sutton, 1995)), if and only if it satisfies a Bellman equation vm(s) = Em [r + γvm(s′) | s] with respect to model m. Conventionally, model-based RL methods focus on finding a value function v that is consistent with a separately learned model m.
3 PREDICTRON ARCHITECTURE
The predictron is composed of four main components. First, a state representation s = f(s) that encodes raw input s (this could be a history of observations, in the partially observed setting, for example when f is a recurrent network) into an internal (abstract, hidden) state s. Second, a model s′, r, γ = m(s, β) that maps from internal state s to subsequent internal state s′, internal rewards r, and internal discounts γ . Third, a value function v that outputs internal values v = v(s) representing the future, internal return from internal state s onwards. The predictron is applied by unrolling its model m multiple “planning” steps to produce internal rewards, discounts and values. We use superscripts •k to indicate internal steps of the model (which have no necessary connection to time steps •t of the environment). Finally, these internal rewards, discounts and values are combined together by an accumulator into an overall estimate of value g. The whole predictron, from input state s to output g, may be viewed as a value function approximator for external targets (i.e. the returns in the real environment). We consider both k-step and λ-weighted accumulators. The k-step predictron rolls its internal model forward k steps. Specifically, the k-step predictron return gk (henceforth abbreviated as preturn) is the internal return obtained by accumulating k model steps, plus a final value vk from the kth step,
gk = r1 + γ1(r2 + γ2(. . . (rk−1 + γk−1(rk + γkvk)) . . .)). (2)
The 0-step preturn is simply the first value g0 = v0. The 1-step preturn is g1 = r1 + γ1v1, and so on (see Fig. 1a). The λ-predictron combines together many k-step preturns. Specifically, it computes a diagonal weight matrix λk from each internal state sk. The accumulator uses weights λ0, ...,λK to aggregate
over k-step preturns g0, ...,gK and output a combined value that we call the λ-preturn gλ,
gλ = K∑ k=0 wkgk where wk = (1− λk) ∏k−1 j=0 λ j if k < K ∏K−1 j=0 λ j otherwise. (3)
where 1 is the identity matrix. This λ-preturn is analogous to the λ-return in the forward-view TD(λ) algorithm (Sutton, 1988; Sutton and Barto, 1998). It may also be computed by a backward accumulation through intermediate steps gk,λ,
gk,λ = (1− λk)vk + λk ( rk+1 + γk+1gk+1,λ ) , (4)
where gK,λ = vK , and then using gλ = g0,λ. Computation in the λ-predictron operates in a sweep, iterating first through the model from k = 0 . . .K and then back through the accumulator from k = K . . . 0 in a single “forward” pass of the network (see Figure 1b). Each λk weight acts as a gate on the computation of the λ-preturn: a value of λk = 0 will truncate the λ-preturn at layer k, while a value of λk = 1 will utilise deeper layers based on additional steps of the model m; the final weight is always λK = 0. The individual λk weights may depend on the corresponding abstract state sk and can differ per prediction. This enables the predictron to compute to an adaptive depth (Graves, 2016) depending on the internal state and learning dynamics of the network.
4 PREDICTRON LEARNING UPDATES
We first consider updates that optimise the joint parameters θ of the state representation, model, and value function. We begin with the k-step predictron. We update the k-step predictron gk towards a target outcome g, such as the Monte-Carlo return from the real environment, by minimising a mean-squared error loss,
Lk = 1
2 ∥∥Ep [g | s]− Em [gk | s]∥∥2 . ∂lk ∂θ = ( g − gk ) ∂gk ∂θ . (5)
where lk = 12 ∥∥g − gk∥∥2 is the sample loss. We can use the gradient of the sample loss to update parameters, e.g. by stochastic gradient descent. For stochastic models, two independent samples are required for gk and ∂g k
∂θ to get unbiased samples for the gradient of L k.
The λ-predictron combines together many k-step preturns. To update the joint parameters θ, we can uniformly average the losses on the individual preturns gk,
L0:K = 1
2K K∑ k=0 ∥∥Ep [g | s]− Em [gk | s]∥∥2 , ∂l0:K ∂θ = 1 K K∑ k=0 ( g − gk ) ∂gk ∂θ . (6)
Alternative, we could weight each loss by the usage wk of the corresponding preturn, such that the gradient is ∑K k=0w k ( g − gk ) ∂gk
∂θ . The λ-predictron uses an accumulator with additional parameters η that determine the relative weighting of the k-step preturns. These weights are also updated so as to minimise a mean-squared error loss Lλ,
Lλ = 1
2 ∥∥Ep [g | s]− Em [gλ | s]∥∥2 , ∂lλ ∂η = ( g − gλ ) ∂gλ ∂η . (7)
In summary, the joint parameters θ of the state representation f , the modelm, and the value function v are updated to make each of the k-step preturns gk more similar to the target g, and the parameters η of the λ-accumulator are updated to make the aggregate λ-preturn gλ more similar to the target g.
4.1 CONSISTENCY (SEMI-SUPERVISED) LEARNING WITH THE λ-PREDICTRON
Ideally, the predictron (f,m, v) learns preturns that are all equal in expectation to the true value function of the environment, Em [ gk | s ] = Ep [gt | s] = vp(s), in which case the preturns must
be equal in expectation, Em [ g0 | s ] = Em [ g1 | s ] = ... = Em [ gK | s ] . In addition, each k-step
preturn must then be equal in expectation to the λ-preturn, Em [ gk | s ] = Em [ gλ | s ] , for any λ parameters. All these consistency relations between preturns give rise to additional constraints upon the predictron. Specifically, we may adjust the parameters of the predictron to lead to consistent preturns, even in the absence of labelled targets. Concretely, we can adjust each preturn gk towards the λ-preturn gλ; in other words, we can update each individual value estimate towards the best aggregated estimate by minimizing
L = 1
2 K∑ k=0 ∥∥Em [gλ | s]− Em [gk | s]∥∥2 , ∂l ∂θ = K∑ k=0 ( gλ − gk ) ∂gk ∂θ . (8)
Here gλ is considered fixed; the parameters θ are only updated to make gk more similar to gλ, not vice versa. This consistency update does not require any labels g or samples from the environment. As a result, it can be applied to (potentially hypothetical) states that have no associated ‘real’ (e.g. Monte-Carlo) outcome: we update the value estimates to be self-consistent with each other. Note the similarity with the semi-supervised setting, where we may have unlabelled inputs.
5 EXPERIMENTS
We conducted experiments on two domains. The first domain consists of randomly generated 20×20 mazes in which each location either is empty or contains a wall. Two locations in a maze are considered connected if they are both empty and we can reach one from the other by moving horizontally or vertically through adjacent empty cells. The goal is to predict, for each of the locations on the diagonal from top-left to bottom-right of the maze, whether the bottom-right corner is connected to that location, given the entire maze as an input image. Some of these predictions will be straightforward, for instance for locations on the diagonal that contain a wall themselves and for locations close to the bottom right. Many other predictive questions seem to require a simple algorithm, such as some form of a flood fill or search; our hypothesis is that an internal model can learn to emulate such algorithms, where naive approximation may struggle. A few example mazes are shown in Figure 2. Our second domain is a simulation of the game of pool, using four balls and four pockets. The simulator is implemented in the physics engine Mujoco (Todorov et al., 2012). We generate sequences of RGB frames starting from a random arrangement of balls on the table. The goal is to simultaneously learn to predict future events for each of the four balls, given 5 RGB frames as input. These events include: collision with any other ball, collision with any boundary of the table, entering a quadrant (×4, for each quadrant), being located in a quadrant (×4, for each quadrant), and entering a pocket
(×4, for each pocket). Each of these 14×4 events provides a binary pseudo-reward that we combine with 5 different discount factors {0, 0.5, 0.9, 0.98, 1} and predict their cumulative discounted sum over various time spans. This yields a total of 280 general value functions. An example trajectory is shown in Figure 2. In both domains, inputs are presented as minibatches of i.i.d. samples with their regression targets. Additional domain details are provided in Appendix E.
5.1 EXPLORING THE PREDICTRON ARCHITECTURE
Our first set of experiments examines three binary dimensions that differentiate the predictron from standard deep networks. We compare eight predictron variants corresponding to the corners of the cube on the left in Figure 3. The first dimension corresponds to whether or not the predictron architecture utilises the structure of an MRP model. In the MRP case, labelled r, γ, internal rewards and discounts are both learned. In the non-r, γ case, which corresponds to a vanilla hidden-to-hidden neural network module, internal rewards and discounts are ignored by fixing their values to rk = 0 and γk = 1. The second dimension is whether a K-step accumulator or λ-accumulator is used to aggregate over preturns. When a λ-accumulator is used, a λ-preturn is computed as described in Section 3. Otherwise, intermediate preturns are ignored by fixing their values to λk = 1 for k < K. In this case, the overall output of the predictron is simply the maximum-depth preturn gK . The third dimension, labelled usage weighting, defines the loss that is used to update the parameters θ. On this dimension, we consider two options: the preturn losses can either be weighted uniformly (see Equation 6), or the update for each preturn gk can be weighted according to the weightwk that determines how much it is used in the λ-predictron’s overall output. We call the latter loss ‘usage weighted‘. Note that for architectures without a λ-accumulator, wk = 0 for k < K, and wK = 1, thus usage weighting then implies backpropagating only the loss on the final preturn gK . All variants utilise a convolutional core with 2 intermediate hidden layers (see Appendix A); parameters were updated by supervised learning (see Appendix B for more details). Root mean squared prediction errors for each architecture, aggregated over all predictions, are shown in Figure 3. The
[Figure 3 graphic: cube of the eight predictron variants spanning three binary dimensions — r, γ model vs. none, λ-accumulator vs. K-step accumulator, and usage weighting — with RMSE learning curves on random mazes (log scale) and pool.]

[Figure 4 graphic: cube spanning (r, γ, λ)-predictron vs. deep network (ConvNet, recurrent ConvNet, ResNet, recurrent ResNet), weight sharing, and skip connections; panels plot RMSE on random mazes (log scale, 0–5M updates) and on pool (0–1M updates) against updates, for shared and unshared cores, with legend: deep net, deep net with skips, (r, γ, λ)-predictron, (r, γ, λ)-predictron with skips.]

Figure 4: Comparing the predictron to baselines. Aggregated prediction errors on random mazes (top) and pool (bottom) over all predictions for the eight architectures corresponding to the cube on the left. Each line is the median RMSE over five seeds; shaded regions encompass all seeds. The full (r, γ, λ)-predictron (red) consistently outperformed conventional deep network architectures (black), with and without skips, and with and without weight sharing.
The top row corresponds to the random mazes and the bottom row to the pool domain. The main conclusion is that learning an MRP model improved performance greatly. The inclusion of λ weights helped as well, especially on pool. Usage weighting further improved performance.
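For concreteness, the accumulators being ablated here can be written in a few lines. The sketch below is an illustration with scalar internal quantities (in the predictron they are vector-valued), not the authors' implementation; the K-step accumulator corresponds to fixing all intermediate gates to 1, and the non-r, γ variant to fixing r^k = 0 and γ^k = 1:

def lambda_preturn(r, gamma, v, lam):
    # r[k], gamma[k]: internal reward/discount from model step k+1 (length K);
    # v[k]: value of abstract state k (length K+1); lam[k]: gates, with lam[K] = 0.
    K = len(r)
    g = v[K]                              # g^{K,lambda} = v^K
    for k in reversed(range(K)):          # backward accumulation, equation (4)
        g = (1 - lam[k]) * v[k] + lam[k] * (r[k] + gamma[k] * g)
    return g

K = 4
r, gamma, v = [0.1] * K, [0.9] * K, [1.0] * (K + 1)
g_lam = lambda_preturn(r, gamma, v, [0.5] * K + [0.0])   # lambda-preturn
g_K = lambda_preturn(r, gamma, v, [1.0] * K + [0.0])     # K-step preturn g^K
g_0 = lambda_preturn(r, gamma, v, [0.0] * K + [0.0])     # 0-step preturn
assert g_0 == v[0]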
5.2 COMPARING THE PREDICTRON TO OTHER DEEP NETWORKS
Our second set of experiments compares the predictron to feedforward and recurrent deep learning architectures, with and without skip connections. We compare the corners of a new cube, as depicted on the left in Figure 4, based on three different binary dimensions. The first dimension of this second cube is whether we use a predictron, or a (non-λ, non-r, γ) deep network that does not have an internal model and does not output or learn from intermediate predictions. We use the most effective predictron from the previous section, i.e., the (r, γ, λ)-predictron with usage weighting. The second dimension is whether weights are shared between all cores (as in a recurrent network), or whether each core uses separate weights (as in a feedforward network). We note that the non-λ, non-r, γ variants of the predictron then correspond to standard (convolutional) feedforward and (unrolled) recurrent neural networks, respectively. The third dimension is whether we include skip connections. This is equivalent to defining the model step to output a change to the current state, Δs, and then defining s^{k+1} = h(s^k + Δs^k), where h is the non-linear function—in our case a ReLU, h(x) = max(0, x). The deep network with skip connections is a variant of ResNet (He et al., 2015). Root mean squared prediction errors for each architecture are shown in Figure 4. All (r, γ, λ)-predictrons (red lines) outperformed the corresponding feedforward or recurrent neural network baselines (black lines), both in the random mazes and in pool. We also investigated the effect of changing the depth of the networks (see Appendix C). The predictron outperformed the corresponding feedforward or recurrent baselines for all depths, with and without skip connections.
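The skip-connection dimension amounts to a one-line change in the model step. A toy sketch, assuming the core returns a state change Δs (the linear core below is only a stand-in):

import numpy as np

def model_step_with_skip(s, core):
    # Without skips: s_next = core(s). With skips: s^{k+1} = h(s^k + Delta s^k), h = ReLU.
    return np.maximum(s + core(s), 0.0)

s = np.random.randn(8)
s_next = model_step_with_skip(s, core=lambda x: 0.1 * x)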
5.3 SEMI-SUPERVISED LEARNING BY CONSISTENCY
We now consider how to use the predictron for semi-supervised learning, training the model on a combination of labelled and unlabelled random mazes. Semi-supervised learning is important because a common bottleneck in applying machine learning in the real world is the difficulty of collecting labelled data, whereas often large quantities of unlabelled data exist. We trained a full (r, γ, λ)-predictron by alternating standard supervised updates with consistency updates, obtained by stochastically minimizing the consistency loss (8), on the unlabelled samples. For each supervised update we apply either 0, 1, or 9 consistency updates. Figure 5 shows that the performance improved monotonically with the number of consistency updates, measured as a function of the number of labelled samples consumed.
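The resulting training schedule is just an interleaving of the two update types. A schematic loop — the step callables and data iterators are placeholders, not the authors' code:

def train_semi_supervised(supervised_step, consistency_step,
                          labelled_batches, unlabelled_batches, n_consistency=9):
    # For each supervised update on a labelled batch, apply n_consistency
    # consistency updates (0, 1, or 9 here) on unlabelled batches, i.e.,
    # stochastically minimise the consistency loss (8).
    for batch in labelled_batches:
        supervised_step(batch)
        for _ in range(n_consistency):
            consistency_step(next(unlabelled_batches))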
5.4 ANALYSIS OF ADAPTIVE DEPTH
In principle, the predictron can adapt its depth to ‘think more’ about some predictions than others, perhaps depending on the complexity of the underlying target. We investigate this by looking at qualitatively different prediction types in pool: ball collisions, rail collisions, pocketing balls, and entering or staying in quadrants. For each prediction type we consider several different time-spans (determined by the real-world discount factors associated with each pseudo-reward). Figure 6 shows distributions of depth for each type of prediction. The ‘depth’ of a predictron is here defined as the effective number of model steps. If the predictron relies fully on the very first value (i.e., λ^0 = 0), this counts as 0 steps. If, instead, it learns to place equal weight on all rewards and on the final value, this counts as 16 steps. Concretely, the depth d can be defined recursively as d = d^0, where d^k = λ^k(1 + γ^k d^{k+1}) and d^K = 0. Note that even for the same input state, each prediction has a separate depth. The depth distributions exhibit three properties. First, different types of predictions used different depths. Second, depth was correlated with the real-world discount for the first four prediction types. Third, the distributions are not strongly peaked, which implies that the depth can differ per input even for a single real-world discount and prediction type. In a control experiment (not shown) we used a scalar λ shared among all predictions, which reduced performance in all scenarios, indicating that the heterogeneous depth is a valuable form of flexibility.
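The depth recursion can be implemented directly. A scalar sketch (ours):

def effective_depth(lam, gamma):
    # lam[k], gamma[k] play the roles of lambda^k and gamma^k in the recursion
    # d^k = lambda^k * (1 + gamma^k * d^{k+1}), with d^K = 0.
    d = 0.0
    for k in reversed(range(len(lam))):
        d = lam[k] * (1.0 + gamma[k] * d)
    return d

assert effective_depth([1.0] * 16, [1.0] * 16) == 16.0  # full use of all 16 steps
assert effective_depth([0.0] * 16, [1.0] * 16) == 0.0   # relies on v^0 only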
5.5 VISUALIZING THE PREDICTIONS IN THE POOL DOMAIN
We test the quality of the predictions in the pool domain to evaluate whether they are well-suited to making decisions. For each sampled pool position, we consider a set I of different initial conditions (different angles and velocities of the white ball), and ask which is more likely to lead to pocketing coloured balls. For each initial condition s ∈ I, we apply the (r, γ, λ)-predictron (shared cores, 16 model steps, no skip connections) to obtain predictions g^λ. We sum the predictions that correspond to pocketing any ball except the white ball, and to real-world discounts γ = 0.98 and γ = 1. We select the condition s* that maximises this sum. We then roll forward the pool simulator from s* and log the number of pocketing events. Figure 2 shows a sampled rollout, using the predictron to pick s*. When providing the choice of 128 angles and two velocities for initial conditions (|I| = 256), this procedure resulted in pocketing 27 coloured balls in 50 episodes. Using the same procedure with an equally deep convolutional network only resulted in 10 pocketing events. These results suggest that the lower loss of the learned (r, γ, λ)-predictron translated into meaningful improvements when informing decisions. A video of the rollouts selected by the predictron is available here: https://youtu.be/BeaLdaN2C3Q.
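The selection procedure is a one-step search over candidate initial conditions. A schematic version — the predict interface below is hypothetical, standing in for a trained (r, γ, λ)-predictron:

def select_shot(predict, candidates):
    # predict(s) -> dict keyed by (event, ball, discount) with lambda-preturn
    # predictions (hypothetical interface). Score a candidate by its summed
    # pocketing predictions for the coloured balls at discounts 0.98 and 1.
    def score(s):
        preds = predict(s)
        return sum(value for (event, ball, g), value in preds.items()
                   if event == "pocket" and ball != "white" and g in (0.98, 1.0))
    return max(candidates, key=score)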
6 RELATED WORK
Lee et al. (2015) introduced a neural network architecture where classifications branch off intermediate hidden layers. An important difference with respect to the λ-predictron is that the weights are hand-tuned as hyper-parameters, whereas in the predictron the λ weights are learnt and, more importantly, conditional on the input. Another difference is that the loss on the auxiliary classifications is used to speed up learning, but the classifications themselves are not combined into an aggregate prediction; the output of the model itself is the deepest prediction. Graves (2016) introduced an architecture with adaptive computation time (ACT), with a discrete (but differentiable) decision on when to halt, and aggregation over the outputs at each pondering step. This is related to our λ weights, but obtains depth in a different way; one notable difference is that the λ-predictron can choose different pondering depths for each of its predictions. Value iteration networks (VINs) (Tamar et al., 2016) also learn value functions end-to-end using an internal model, similar to the (non-λ) predictron. However, VINs plan via convolutional operations over the full input state space, whereas the predictron plans via imagined trajectories through an abstract state space. This may allow the predictron architecture to scale much more effectively in domains that do not have a natural two-dimensional encoding of the state space. The notion of learning about many predictions of the future relates to work on predictive state representations (PSRs; Littman et al., 2001), general value functions (GVFs; Sutton et al., 2011), and nexting (Modayil et al., 2012). Such predictions have been shown to be useful as representations (Schaul and Ring, 2013) and for transfer (Schaul et al., 2015). So far, however, none of these have been considered for learning abstract models. Schmidhuber (2015) discusses learning abstract models, but maintains separate losses for the model and a controller, and suggests training the model unsupervised to compactly encode the entire history of observations, through predictive coding. The predictron's abstract model is instead trained end-to-end to obtain accurate values.
7 CONCLUSION
The predictron is a single differentiable architecture that rolls forward an internal model to estimate external values. This internal model may be given both the structure and the semantics of traditional reinforcement learning models. But unlike most approaches to model-based reinforcement learning, the model is fully abstract: it need not correspond to the real environment in any human understandable fashion, so long as its rolled-forward “plans” accurately predict outcomes in the true environment. The predictron may be viewed as a novel network architecture that incorporates several separable ideas. First, the predictron outputs a value by accumulating rewards over a series of internal planning steps. Second, each forward pass of the predictron outputs values at multiple planning depths. Third, these values may be combined together, also within a single forward pass, to output an overall ensemble value. Finally, the different values output by the predictron may be encouraged to be self-consistent with each other, to provide an additional signal during learning. Our experiments demonstrate that these differences result in more accurate predictions of value, in reinforcement learning environments, than more conventional network architectures. We have focused on value prediction tasks in uncontrolled environments. However, these ideas may transfer to the control setting, for example by using the predictron as a Q-network (Mnih et al., 2015). Even more intriguing is the possibility of learning an internal MDP with abstract internal actions, rather than the MRP considered in this paper. We aim to explore these ideas in future work.
A ARCHITECTURE
The state representation f is a two-layer convolutional neural network (LeCun et al., 1998). There is a core c, again based on convolutions, that combines both the MRP model and the λ-network into a single repeatable module, such that s^{k+1}, r^{k+1}, γ^{k+1}, λ^k = c(s^k). This core is deterministic, and is duplicated K times in the predictron with shared weights. (The predictron with unshared weights has K distinct cores.) Finally, the value network v is a fully connected neural network that computes v^k = v(s^k). Concretely, the core (Figure 7) consists first of a convolutional layer that maps into an intermediate (hidden) layer. From this layer, another two convolutions compute the next abstract state of the predictron. Additionally, this same hidden layer is flattened and fed into three separate networks, with two fully connected layers each. The outputs of these three networks represent the internal rewards, discounts, and lambdas. A similar small network also hangs off the internal states, in addition to the core, and computes the values. All convolutions use 3 × 3 filters and a stride of one, and use padding to retain the size of the feature maps. All feature maps have 32 channels. The hidden layers within the MLPs have 32 hidden units. In Figure 7 the convolutional layers are schematically drawn with three channels, flattening is represented by curly brackets, and the arrows represent the small multi-layer perceptrons which compute values, rewards, discounts, and lambdas. We allow up to 16 model steps in our experiments, resulting in 52-layer deep networks—two convolutional layers for the state representation, 3 × 16 = 48 convolutional layers for the core steps, and two fully connected layers for the values on top of the final state. Between every two layers we apply batch normalization (Ioffe and Szegedy, 2015) followed by a ReLU non-linearity (Glorot et al., 2011). The value and reward networks end with a linear layer, whereas the discount and λ-networks additionally add a sigmoid non-linearity to ensure that these quantities lie in [0, 1].
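For concreteness, a compact PyTorch-flavoured sketch of such a core, reconstructed from the description above (batch-normalisation placement and initialisation are omitted, and LazyLinear is our shorthand for the first fully connected layer on the flattened hidden state):

import torch
import torch.nn as nn

class PredictronCore(nn.Module):
    # One core step: s^k -> (s^{k+1}, r^{k+1}, gamma^{k+1}, lambda^k).
    # The value network v(s) hangs off the abstract states separately (not shown).
    def __init__(self, channels=32, hidden=32, n_preds=1):
        super().__init__()
        self.to_hidden = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.to_state = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU())

        def head(squash):
            layers = [nn.Flatten(), nn.LazyLinear(hidden), nn.ReLU(),
                      nn.Linear(hidden, n_preds)]
            if squash:
                layers.append(nn.Sigmoid())  # discounts and lambdas lie in [0, 1]
            return nn.Sequential(*layers)

        self.reward, self.discount, self.lam = head(False), head(True), head(True)

    def forward(self, s):
        h = torch.relu(self.to_hidden(s))
        return self.to_state(h), self.reward(h), self.discount(h), self.lam(h)

core = PredictronCore()
s = torch.zeros(2, 32, 20, 20)  # a batch of abstract maze states
s_next, r, gamma, lam = core(s)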
B TRAINING
All experiments used the supervised (Monte-Carlo) update described in Section 4, except for the semi-supervised experiment, which used the consistency update described in Section 4.1. We update all parameters by applying the Adam optimiser (Kingma and Ba, 2015) to stochastic gradients of the corresponding loss functions. Each return is normalised by dividing it by its standard deviation (as measured, prior to the experiment, on a set of 20,000 episodes). In all experiments, the learning rate was 0.001, and the other parameters of the Adam optimiser were β1 = 0.9, β2 = 0.999, and ε = 10^{−8}. We used mini-batches of 100 samples.
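A minimal sketch of this optimisation setup, assuming PyTorch (the linear model and random batch are stand-ins for the predictron and its data):

import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8)
x, y = torch.randn(100, 4), torch.randn(100, 1)  # one mini-batch of 100 samples
y = y / y.std()                                  # returns normalised by their std
loss = 0.5 * ((model(x) - y) ** 2).mean()
optimizer.zero_grad(); loss.backward(); optimizer.step()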
C COMPARING ARCHITECTURES OF DIFFERENT DEPTHS
We investigated the effect of changing the depth of the networks, with and without skip connections. Figure 8 shows that skip connections (dashed lines) make the conventional architectures (black/grey lines) more robust to the depth (i.e., the black/grey dashed lines almost overlap, especially on pool), and that the predictron outperforms the corresponding feedforward or recurrent baselines for all depths, with and without skips.
D CAPACITY COMPARISONS
In this section, we present some additional experiments comparing the predictron to more conventional deep networks. The purposes of these experiments are 1) to show that the conclusions obtained above do not depend on the precise architecture used, and 2) to show that the structure of the network—whether we use a predictron or not—is more important than the raw number of parameters. Specifically, we again consider the same 20 by 20 random mazes, and the pool task described in the main text. As described in Section A, for the results in the paper we used an encoder that preserved the size of the input planes, 20 × 20 for the mazes and 28 × 28 for pool. Each convolution had 32 channels, and therefore the abstract states were 20 × 20 × 32 for the mazes and 28 × 28 × 32 for pool. We now consider a different architecture, where we no longer pad the convolutions used in the encoder. For the mazes, we still use two layers of 3 × 3 stride-1 convolutions, which means the planes reduce in size to 16 × 16. This means that the abstract states are about one third smaller. For pool, we use three 5 × 5 stride-1 convolutions, which bring us from 28 × 28 down to 16 × 16 as well, so the abstract states are now of equal size for both experiments. For pool, this is approximately a two-thirds reduction, which helps reduce the compute needed to run the model. Most of the parameters in the predictron are in the fully connected layers. Previously, the first fully connected layer for each of the internal values, rewards, discounts, and λ-parameters would take a flattened abstract state and map into 32 hidden nodes. This means the number of parameters in this layer was 20 × 20 × 32 × 32 = 409,600 for the mazes and 28 × 28 × 32 × 32 = 802,816 for pool. The predictron with a shared core has four of these layers, one for each of the internal values, rewards, discounts, and λs, compared to one for the deep network, which only has values. We change this in two ways. First, we add a 1 × 1 convolution with a stride of 1 and 8 channels before the first fully connected layer for each of these outputs. This reduces the number of channels, and therefore the number of parameters in the subsequent fully connected layer, by a factor of four. Second, we tested three different numbers of hidden nodes: 32, 128, or 512.

The deep network with 128 hidden nodes for its values has exactly the same number of parameters as the (r, γ, λ)-predictron with 32 hidden nodes for each of its outputs. Before, the deep network had fewer parameters, because we kept this number fixed at 32 across experiments. This opens the question of whether the improved performance of the predictron was merely an artifact of having more parameters. We tested this hypothesis, and the results are shown in Figure 9. Figure 9 shows that in each setting—on the mazes and pool, and with or without shared cores—the predictrons always performed better than all of the deep networks. This includes the 32-node predictron (darkest red) compared to the 512-node deep network (lightest blue), even though the latter has approximately 4 times as many parameters (1.27M vs 4.85M). This means that the number of parameters mattered less than whether or not we use a predictron.
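The parameter arithmetic quoted above is easy to verify:

flat = lambda h, w, c: h * w * c
assert flat(20, 20, 32) * 32 == 409_600  # first FC layer on the maze abstract state
assert flat(28, 28, 32) * 32 == 802_816  # first FC layer on the pool abstract state
# a 1x1 convolution down to 8 channels shrinks the subsequent FC layer by 4x:
assert (flat(16, 16, 32) * 32) // (flat(16, 16, 8) * 32) == 4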
E ADDITIONAL DOMAIN DETAILS
We now provide some additional details of the two domains.
E.1 POOL
To generate sequences in the Pool domain, the initial locations of 4 balls of different colours are sampled at random. The white ball is the only one moving initially. Its velocity has a norm sampled uniformly between 7 and 14. The initial angle is sampled uniformly in the range (0, 2π). From the initial condition, the Mujoco simulation is run forward until all balls have stopped moving; sequences that last more than 151 frames are rejected, and a new one is generated as a replacement. Each frame is rendered by Mujoco as a 280×280 RGB image, and subsequently downsampled through bilinear interpolation to a 28×28 RGB input (see Figure 10 for an example). Since the 280 signals described in Section 5 as targets for the Pool experiments have very different levels of sparsity, resulting in values with very different scales, we normalised the pseudo-returns. The normalisation procedure consisted of dividing all targets by their standard deviation, as empirically measured across an initial set of 20,000 sequences.
E.2 RANDOM MAZES
To generate mazes we first determine, with a stochastic line search, a number of walls such that the top-left corner is connected to the bottom-right corner (both always forced to be empty) in approximately 50% of the mazes. We then shuffle the walls uniformly at random. For 20 by 20 mazes this means 70% of locations are empty and 30% contain walls. More than a googol different such 20-by-20 mazes exist (as $\binom{398}{120} > 10^{100}$). | 1. What is the main contribution of the paper regarding planning problems and deep network architectures?
2. What are the strengths and weaknesses of the proposed approach, particularly in its connection to past work and necessity of the recurrent core?
3. How does the reviewer assess the experiments and their setup, and what changes would they suggest to improve the validation of the learned models for planning? | Review | Review
The paper proposes an approach to learning models that are good for planning problems, using deep network architectures. The key idea is to ensure that models are self-consistent and accurately predict the future. The problem of learning good planning models (as opposed to simply good predictive models) is really crucial, and attempts so far have failed. This paper is conceptually interesting and provides a valuable perspective on how to achieve this goal. Its incorporation of key RL concepts (like discounting and eligibility traces) and the flexibility to learn these is very appealing. Hence, I think it should be accepted. This being said, I think the paper does not quite live up to its claims. Here are some aspects that need to be addressed (in order of importance):
1. Relationship to past work: the proposed representation seems essentially a non-linear implementation of the Horde architecture. It is also very similar in spirit to predictive state representations. Yet these connections are almost not discussed at all. The related work paragraph is very brief and needs expansion to situate the work in the context of other predictive-modelling attempts that both were designed to be used for planning and (in the case of PSRs) were in fact successfully used in planning tasks. Some newer work on learning action-conditional models in Atari games is also not discussed. Situating the paper better in the context of existing model learning would also make it easier to understand both the motivations and the novel contributions of the work (otherwise, the reader is left to try to elucidate this for themselves, and may come to the wrong conclusion).
2. The paper needs to provide some insight about the necessity of the recurrent core of the architecture. The ideas are presented nicely in a general fashion, yet the proposed implementation is quite specific and "bulky" (a very high number of parameters). Is this really necessary in all tasks? Can one implement the basic ideas outside of the particular architecture proposed? Can we use feedforward approximations, or is the recurrent part somehow necessary? At the very least the paper should expand the discussion on this topic, if not provide some empirical evidence.
3. The experiments are very restricted in their setup: i.i.d. data drawn from fixed distributions, with correct targets. So, the proposed approach seems like overkill for these particular tasks. There is an indirect attempt to provide evidence that the learned models would be useful for planning, but no direct measurement to support this claim (no use of the models in planning). Compared to the original Horde paper, fewer predictions are learned, and these are more similar to each other. While I sympathize with the desire to go in steps, I think the paper stops short of where it should. At the very least, doing prediction in the context of an actual RL prediction task, with non-i.i.d. inputs, should be included in the paper. This should only require minor modifications to the experiments (same task, just different data). Ideally, in the case of the mazes, the learned models should be used in some form of simplified planning to learn paths. This would align the experiments much better with the claims in the presentation of the architecture.
ICLR | Title
The Predictron: End-To-End Learning and Planning
Abstract
One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple “imagined” planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.
1 INTRODUCTION
The central idea of model-based reinforcement learning is to decompose the RL problem into two subproblems: learning a model of the environment, and then planning with this model. The model is typically represented by a Markov reward process (MRP) or decision process (MDP). The planning component uses this model to evaluate and select among possible strategies. This is typically achieved by rolling forward the model to construct a value function that estimates cumulative reward. In prior work, the model is trained essentially independently of its use within the planner. As a result, the model is not well-matched with the overall objective of the agent. Prior deep reinforcement learning methods have successfully constructed models that can unroll near pixel-perfect reconstructions (Oh et al., 2015; Chiappa et al., 2016), but have yet to surpass state-of-the-art model-free methods in challenging RL domains with raw inputs (e.g., Mnih et al., 2015; 2016; Lillicrap et al., 2016). In this paper we introduce a new architecture, which we call the predictron, that integrates learning and planning into one end-to-end training procedure. At every step, a model is applied to an internal state, to produce a next state, reward, discount, and value estimate. This model is completely abstract and its only goal is to facilitate accurate value prediction. For example, to plan effectively in a game, an agent must be able to predict the score. If our model makes accurate predictions, then an optimal plan with respect to our model will also be an optimal plan for the underlying game – even if that model uses a different state space (e.g., an abstract representation of enemy positions, ignoring their shapes and colours), action space (e.g., a high-level action to move away from an enemy), rewards (e.g., a single abstract step could have a higher value than any real reward), or even time-step (e.g., a single abstract step could "jump" the agent to the end of a corridor). All we require is that trajectories through the abstract model produce scores that are consistent with trajectories through the real environment. This is achieved by training the predictron end-to-end, so as to make its value estimates as accurate as possible. An ideal model could generalise to many different prediction tasks, rather than overfitting to a single task; and could learn from a rich variety of feedback signals, not just a single extrinsic reward. We therefore train the predictron to predict a host of different value functions for a variety of pseudo-reward functions and discount factors. These pseudo-rewards can encode any event or aspect of the environment that the agent may care about, e.g., staying alive or reaching the next room. We focus upon the prediction task: estimating value functions in MRP environments with uncontrolled dynamics. In this case, the predictron can be implemented as a deep neural network with an
MRP as a recurrent core. The predictron unrolls this core multiple steps and accumulates rewards into an overall estimate of value. We applied the predictron to procedurally generated random mazes, and a simulated pool domain, directly from pixel inputs. In both cases, the predictron significantly outperformed model-free algorithms with conventional deep network architectures; and was much more robust to architectural choices such as depth.
2 BACKGROUND
We consider environments defined by an MRP with states s ∈ S. The MRP is defined by a function, s′, r, γ = p(s, α), where s′ is the next state, r is the reward, and γ is the discount factor, which can for instance represent the non-termination probability for this transition. The process may be stochastic, given IID noise α. The return of an MRP is the cumulative discounted reward over a single trajectory, $g_t = r_{t+1} + \gamma_{t+1} r_{t+2} + \gamma_{t+1}\gamma_{t+2} r_{t+3} + \dots$, where $\gamma_t$ can vary per time-step. We consider a generalisation of the MRP setting that includes vector-valued rewards r, diagonal-matrix discounts γ, and vector-valued returns g; definitions are otherwise identical to the above. We use this bold-font notation to closely match the more familiar scalar MRP case; the majority of the paper can be comfortably understood by reading all rewards as scalars, and all discount factors as scalar and constant, i.e., $\gamma_t = \gamma$. The value function of an MRP p is the expected return from state s, $v_p(s) = \mathbb{E}_p\left[g_t \mid s_t = s\right]$. In the vector case, these are known as general value functions (Sutton et al., 2011). We will say that a (general) value function v(·) is consistent with environment p if and only if $v = v_p$, which satisfies the following Bellman equation (Bellman, 1957),
$v_p(s) = \mathbb{E}_p\left[ r + \gamma v_p(s') \mid s \right]$. (1)
In model-based reinforcement learning (Sutton and Barto, 1998), an approximation m ≈ p to the environment is learned. In the uncontrolled setting this model is normally an MRP s′, r, γ = m(s, β) that maps from state s to subsequent state s′ and additionally outputs rewards r and discounts γ; the model may be stochastic given an IID source of noise β. A (general) value function $v_m(\cdot)$ is consistent with model m (or valid; Sutton, 1995) if and only if it satisfies a Bellman equation $v_m(s) = \mathbb{E}_m\left[ r + \gamma v_m(s') \mid s \right]$ with respect to model m. Conventionally, model-based RL methods focus on finding a value function v that is consistent with a separately learned model m.
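Consistency with a given MRP can be verified directly on small examples. A toy check of the Bellman equation (1) for a three-state chain (our own illustration, not from the paper):

import numpy as np

P = np.array([[0.0, 1.0, 0.0],   # toy transition matrix
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])  # absorbing final state
r = np.array([1.0, 0.5, 0.0])    # expected immediate reward per state
gamma = 0.9
v = np.linalg.solve(np.eye(3) - gamma * P, r)  # unique solution of v = r + gamma P v
assert np.allclose(v, r + gamma * P @ v)       # v_p(s) = E_p[r + gamma v_p(s') | s]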
3 PREDICTRON ARCHITECTURE
The predictron is composed of four main components. First, a state representation s = f(s) that encodes the raw input s (this could be a history of observations, in the partially observed setting, for example when f is a recurrent network) into an internal (abstract, hidden) state s. Second, a model s′, r, γ = m(s, β) that maps from internal state s to subsequent internal state s′, internal rewards r, and internal discounts γ. Third, a value function v that outputs internal values v = v(s) representing the future, internal return from internal state s onwards. The predictron is applied by unrolling its model m multiple "planning" steps to produce internal rewards, discounts and values. We use superscripts •k to indicate internal steps of the model (which have no necessary connection to time steps •t of the environment). Finally, these internal rewards, discounts and values are combined together by an accumulator into an overall estimate of value g. The whole predictron, from input state s to output g, may be viewed as a value function approximator for external targets (i.e., the returns in the real environment). We consider both k-step and λ-weighted accumulators. The k-step predictron rolls its internal model forward k steps. Specifically, the k-step predictron return $g^k$ (henceforth abbreviated as preturn) is the internal return obtained by accumulating k model steps, plus a final value $v^k$ from the kth step,

$g^k = r^1 + \gamma^1 \left( r^2 + \gamma^2 \left( \dots \left( r^{k-1} + \gamma^{k-1} \left( r^k + \gamma^k v^k \right) \right) \dots \right) \right)$. (2)
The 0-step preturn is simply the first value, $g^0 = v^0$. The 1-step preturn is $g^1 = r^1 + \gamma^1 v^1$, and so on (see Fig. 1a). The λ-predictron combines together many k-step preturns. Specifically, it computes a diagonal weight matrix $\lambda^k$ from each internal state $s^k$. The accumulator uses weights $\lambda^0, \dots, \lambda^K$ to aggregate over the k-step preturns $g^0, \dots, g^K$ and outputs a combined value that we call the λ-preturn $g^\lambda$,

$g^\lambda = \sum_{k=0}^{K} w^k g^k, \quad \text{where} \quad w^k = \begin{cases} (1 - \lambda^k) \prod_{j=0}^{k-1} \lambda^j & \text{if } k < K, \\ \prod_{j=0}^{K-1} \lambda^j & \text{otherwise,} \end{cases}$ (3)
where 1 is the identity matrix. This λ-preturn is analogous to the λ-return in the forward-view TD(λ) algorithm (Sutton, 1988; Sutton and Barto, 1998). It may also be computed by a backward accumulation through intermediate steps $g^{k,\lambda}$,

$g^{k,\lambda} = \left(1 - \lambda^k\right) v^k + \lambda^k \left( r^{k+1} + \gamma^{k+1} g^{k+1,\lambda} \right)$, (4)

where $g^{K,\lambda} = v^K$, and then using $g^\lambda = g^{0,\lambda}$. Computation in the λ-predictron operates in a sweep, iterating first through the model from k = 0 . . . K and then back through the accumulator from k = K . . . 0 in a single "forward" pass of the network (see Figure 1b). Each $\lambda^k$ weight acts as a gate on the computation of the λ-preturn: a value of $\lambda^k = 0$ will truncate the λ-preturn at layer k, while a value of $\lambda^k = 1$ will utilise deeper layers based on additional steps of the model m; the final weight is always $\lambda^K = 0$. The individual $\lambda^k$ weights may depend on the corresponding abstract state $s^k$ and can differ per prediction. This enables the predictron to compute to an adaptive depth (Graves, 2016) depending on the internal state and learning dynamics of the network.
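The weights $w^k$ in equation (3) form a convex combination whenever the gates lie in [0, 1]: the terms telescope and sum to one. A short sketch (ours) makes this explicit:

import numpy as np

def preturn_weights(lam):
    # lam: gates lambda^0..lambda^K (the final gate lambda^K is 0 by convention)
    K = len(lam) - 1
    cum = np.concatenate([[1.0], np.cumprod(lam[:-1])])  # prod_{j<k} lambda^j
    w = (1.0 - np.asarray(lam)) * cum
    w[K] = cum[K]                                        # w^K = prod_{j<K} lambda^j
    return w

w = preturn_weights([0.3, 0.8, 0.5, 0.0])
assert np.isclose(w.sum(), 1.0)                          # a proper convex combination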
4 PREDICTRON LEARNING UPDATES
We first consider updates that optimise the joint parameters θ of the state representation, model, and value function. We begin with the k-step predictron. We update the k-step preturn $g^k$ towards a target outcome g, such as the Monte-Carlo return from the real environment, by minimising a mean-squared error loss,
$L^k = \frac{1}{2} \left\| \mathbb{E}_p\left[ g \mid s \right] - \mathbb{E}_m\left[ g^k \mid s \right] \right\|^2, \qquad \frac{\partial l^k}{\partial \theta} = -\left( g - g^k \right) \frac{\partial g^k}{\partial \theta}$, (5)

where $l^k = \frac{1}{2}\left\| g - g^k \right\|^2$ is the sample loss. We can use the gradient of the sample loss to update parameters, e.g., by stochastic gradient descent. For stochastic models, two independent samples are required for $g^k$ and $\frac{\partial g^k}{\partial \theta}$ to obtain unbiased samples of the gradient of $L^k$.
The λ-predictron combines together many k-step preturns. To update the joint parameters θ, we can uniformly average the losses on the individual preturns $g^k$,

$L^{0:K} = \frac{1}{2K} \sum_{k=0}^{K} \left\| \mathbb{E}_p\left[ g \mid s \right] - \mathbb{E}_m\left[ g^k \mid s \right] \right\|^2, \qquad \frac{\partial l^{0:K}}{\partial \theta} = -\frac{1}{K} \sum_{k=0}^{K} \left( g - g^k \right) \frac{\partial g^k}{\partial \theta}$. (6)

Alternatively, we could weight each loss by the usage $w^k$ of the corresponding preturn, such that the gradient is $-\sum_{k=0}^{K} w^k \left( g - g^k \right) \frac{\partial g^k}{\partial \theta}$. The λ-predictron uses an accumulator with additional parameters η that determine the relative weighting of the k-step preturns. These weights are also updated so as to minimise a mean-squared error loss $L^\lambda$,
$L^\lambda = \frac{1}{2} \left\| \mathbb{E}_p\left[ g \mid s \right] - \mathbb{E}_m\left[ g^\lambda \mid s \right] \right\|^2, \qquad \frac{\partial l^\lambda}{\partial \eta} = -\left( g - g^\lambda \right) \frac{\partial g^\lambda}{\partial \eta}$. (7)
In summary, the joint parameters θ of the state representation f, the model m, and the value function v are updated to make each of the k-step preturns $g^k$ more similar to the target g, and the parameters η of the λ-accumulator are updated to make the aggregate λ-preturn $g^\lambda$ more similar to the target g.
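The sign conventions in these gradients can be sanity-checked numerically. A toy finite-difference check of the sample gradient in equation (5), using a scalar parameter θ and the stand-in preturn $g^k(\theta) = \theta^2$ (our own illustration):

g_target, theta, eps = 2.0, 0.7, 1e-6
g_k = lambda th: th ** 2                                 # toy differentiable preturn
loss = lambda th: 0.5 * (g_target - g_k(th)) ** 2        # sample loss l^k

analytic = -(g_target - g_k(theta)) * (2 * theta)        # -(g - g^k) * dg^k/dtheta
numeric = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
assert abs(analytic - numeric) < 1e-5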
4.1 CONSISTENCY (SEMI-SUPERVISED) LEARNING WITH THE λ-PREDICTRON
Ideally, the predictron (f, m, v) learns preturns that are all equal in expectation to the true value function of the environment, $\mathbb{E}_m\left[ g^k \mid s \right] = \mathbb{E}_p\left[ g_t \mid s \right] = v_p(s)$, in which case the preturns must be equal in expectation, $\mathbb{E}_m\left[ g^0 \mid s \right] = \mathbb{E}_m\left[ g^1 \mid s \right] = \dots = \mathbb{E}_m\left[ g^K \mid s \right]$. In addition, each k-step preturn must then be equal in expectation to the λ-preturn, $\mathbb{E}_m\left[ g^k \mid s \right] = \mathbb{E}_m\left[ g^\lambda \mid s \right]$, for any λ parameters. All these consistency relations between preturns give rise to additional constraints upon the predictron. Specifically, we may adjust the parameters of the predictron to lead to consistent preturns, even in the absence of labelled targets. Concretely, we can adjust each preturn $g^k$ towards the λ-preturn $g^\lambda$; in other words, we can update each individual value estimate towards the best aggregated estimate by minimizing
$L = \frac{1}{2} \sum_{k=0}^{K} \left\| \mathbb{E}_m\left[ g^\lambda \mid s \right] - \mathbb{E}_m\left[ g^k \mid s \right] \right\|^2, \qquad \frac{\partial l}{\partial \theta} = -\sum_{k=0}^{K} \left( g^\lambda - g^k \right) \frac{\partial g^k}{\partial \theta}$. (8)
Here $g^\lambda$ is considered fixed; the parameters θ are only updated to make $g^k$ more similar to $g^\lambda$, not vice versa. This consistency update does not require any labels g or samples from the environment. As a result, it can be applied to (potentially hypothetical) states that have no associated 'real' (e.g., Monte-Carlo) outcome: we update the value estimates to be self-consistent with each other. Note the similarity with the semi-supervised setting, where we may have unlabelled inputs.
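In an autograd framework, the asymmetry of this update — $g^\lambda$ is a fixed target while each $g^k$ is adjusted — is a stop-gradient on the aggregate. A PyTorch-flavoured sketch (illustrative; the preturn tensors and weights are random stand-ins):

import torch

preturns = [torch.randn(5, requires_grad=True) for _ in range(4)]  # stand-ins for g^0..g^3
weights = torch.softmax(torch.randn(4), dim=0)                     # stand-in lambda-weights w^k

g_lam = sum(w * g for w, g in zip(weights, preturns)).detach()     # fixed aggregate target
loss = 0.5 * sum(((g_lam - g) ** 2).sum() for g in preturns)       # consistency loss (8)
loss.backward()  # gradients flow into each preturn, not into g_lam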
5 EXPERIMENTS
We conducted experiments on two domains. The first domain consists of randomly generated 20×20 mazes in which each location either is empty or contains a wall. Two locations in a maze are considered connected if they are both empty and we can reach one from the other by moving horizontally or vertically through adjacent empty cells. The goal is to predict, for each of the locations on the diagonal from top-left to bottom-right of the maze, whether the bottom-right corner is connected to that location, given the entire maze as an input image. Some of these predictions will be straightforward, for instance for locations on the diagonal that contain a wall themselves and for locations close to the bottom right. Many other predictive questions seem to require a simple algorithm, such as some form of a flood fill or search; our hypothesis is that an internal model can learn to emulate such algorithms, where naive approximation may struggle. A few example mazes are shown in Figure 2. Our second domain is a simulation of the game of pool, using four balls and four pockets. The simulator is implemented in the physics engine Mujoco (Todorov et al., 2012). We generate sequences of RGB frames starting from a random arrangement of balls on the table. The goal is to simultaneously learn to predict future events for each of the four balls, given 5 RGB frames as input. These events include: collision with any other ball, collision with any boundary of the table, entering a quadrant (×4, for each quadrant), being located in a quadrant (×4, for each quadrant), and entering a pocket
(×4, for each pocket). Each of these 14×4 events provides a binary pseudo-reward that we combine with 5 different discount factors {0, 0.5, 0.9, 0.98, 1} and predict their cumulative discounted sum over various time spans. This yields a total of 280 general value functions. An example trajectory is shown in Figure 2. In both domains, inputs are presented as minibatches of i.i.d. samples with their regression targets. Additional domain details are provided in Appendix E.
5.1 EXPLORING THE PREDICTRON ARCHITECTURE
Our first set of experiments examines three binary dimensions that differentiate the predictron from standard deep networks. We compare eight predictron variants corresponding to the corners of the cube on the left in Figure 3. The first dimension corresponds to whether or not the predictron architecture utilises the structure of an MRP model. In the MRP case, labelled r, γ, internal rewards and discounts are both learned. In the non-r, γ case, which corresponds to a vanilla hidden-to-hidden neural network module, internal rewards and discounts are ignored by fixing their values to rk = 0 and γk = 1. The second dimension is whether a K-step accumulator or λ-accumulator is used to aggregate over preturns. When a λ-accumulator is used, a λ-preturn is computed as described in Section 3. Otherwise, intermediate preturns are ignored by fixing their values to λk = 1 for k < K. In this case, the overall output of the predictron is simply the maximum-depth preturn gK . The third dimension, labelled usage weighting, defines the loss that is used to update the parameters θ. On this dimension, we consider two options: the preturn losses can either be weighted uniformly (see Equation 6), or the update for each preturn gk can be weighted according to the weightwk that determines how much it is used in the λ-predictron’s overall output. We call the latter loss ‘usage weighted‘. Note that for architectures without a λ-accumulator, wk = 0 for k < K, and wK = 1, thus usage weighting then implies backpropagating only the loss on the final preturn gK . All variants utilise a convolutional core with 2 intermediate hidden layers (see Appendix A); parameters were updated by supervised learning (see Appendix B for more details). Root mean squared prediction errors for each architecture, aggregated over all predictions, are shown in Figure 3. The
r,
w e ig h t sh arin g
skip connections (r, , )-predictron
ConvNet
recurrent ConvNet
ResNet
recurrent ResNet
usage w eighting
0 1M 2M 3M 4M 5M
0.0001
0.001
0.01
R M
S E o
n r
a n d o m
m a ze s (l o g s ca le )
Shared core
deep net deep net with skips (r, γ, λ)-predictron (r, γ, λ)-predictron with skips
0 1M 2M 3M 4M 5M
Unshared cores
0 500K 1M
Updates
0.2
0.3
0.4
R M
S E o
n p
o o l
0 500K 1M
Updates
Figure 4: Comparing predictron to baselines. Aggregated prediction errors on random mazes (top) and pool (bottom) over all predictions for the eight architectures corresponding to the cube on the left. Each line is the median of RMSE over five seeds; shaded regions encompass all seeds. The full (r, γ, λ)-predictron (red), consistently outperformed conventional deep network architectures (black), with and without skips and with and without weight sharing.
top row corresponds to the random mazes and the bottom row to the pool domain. The main conclusion is that learning an MRP model improved performance greatly. The inclusion of λ weights helped as well, especially on pool. Usage weighting further improved performance.
5.2 COMPARING THE PREDICTRON TO OTHER DEEP NETWORKS
Our second set of experiments compares the predictron to feedforward and recurrent deep learning architectures, with and without skip connections. We compare the corners of a new cube, as depicted on the left in Figure 4, based on three different binary dimensions. The first dimension of this second cube is whether we use a predictron, or a (non-λ, non-r, γ) deep network that does not have an internal model and does not output or learn from intermediate predictions. We use the most effective predictron from the previous section, i.e., the (r, γ, λ)-predictron with usage weighting. The second dimension is whether weights are shared between all cores (as in a recurrent network), or whether each core uses separate weights (as in a feedforward network). We note that the nonλ, non-r, γ variants of the predictron then correspond to standard (convolutional) feedforward and (unrolled) recurrent neural networks respectively. The third dimension is whether we include skip connections. This is equivalent to defining the model step to output a change to the current state, ∆s, and then defining sk+1 = h(sk + ∆sk), where h is the non-linear function—in our case a ReLU, h(x) = max(0, x). The deep network with skip connections is a variant of ResNet (He et al., 2015). Root mean squared prediction errors for each architecture are shown in Figure 4. All (r, γ, λ)predictrons (red lines) outperformed the corresponding feedforward or recurrent neural network baselines (black lines) both in the random mazes and in pool. We also investigated the effect of changing the depth of the networks (see Appendix C). The predictron outperformed the corresponding feedforward or recurrent baselines for all depths, with and without skip connections.
5.3 SEMI-SUPERVISED LEARNING BY CONSISTENCY
We now consider how to use the predictron for semi-supervised learning, training the model on a combination of labelled and unlabelled random mazes. Semi-supervised learning is important because a common bottleneck in applying machine learning in the real world is the difficulty of collecting labelled data, whereas often large quantities of unlabelled data exist. We trained a full (r, γ, λ)-predictron by alternating standard supervised updates with consistency updates, obtained by stochastically minimizing the consistency loss (8), on the unlabelled samples. For each supervised update we apply either 0, 1, or 9 consistency updates. Figure 5 shows that
the performance improved monotonically with the number of consistency updates, measured as a function of the number of labelled samples consumed.
5.4 ANALYSIS OF ADAPTIVE DEPTH
In principle, the predictron can adapt its depth to ‘think more’ about some predictions than others, perhaps depending on the complexity of the underlying target. We investigate this by looking at qualitatively different prediction types in pool: ball collisions, rail collisions, pocketing balls, and entering or staying in quadrants. For each prediction type we consider several different time-spans (determined by the real-world discount factors associated with each pseudo-reward). Figure 6 shows distributions of depth for each type of prediction. The ‘depth’ of a predictron is here defined as the effective number of model steps. If the predictron relies fully on the very first value (i.e., λ0 = 0), this counts as 0 steps. If, instead, it learns to place equal weight on all rewards and on the final value, this counts as 16 steps. Concretely, the depth d can be defined recursively as d = d0 where dk = λk(1 + γkdk+1) and dK = 0. Note that even for the same input state, each prediction has a separate depth. The depth distributions exhibit three properties. First, different types of predictions used different depths. Second, depth was correlated with the real-world discount for the first four prediction types. Third, the distributions are not strongly peaked, which implies that the depth can differ per input even for a single real-world discount and prediction type. In a control experiment (not shown) we used a scalar λ shared among all predictions, which reduced performance in all scenarios, indicating that the heterogeneous depth is a valuable form of flexibility.
5.5 VISUALIZING THE PREDICTIONS IN THE POOL DOMAIN
We test the quality of the predictions in the pool domain to evaluate whether they are well-suited to making decisions. For each sampled pool position, we consider a set I of different initial conditions (different angles and velocity of the white ball), and ask which is more likely to lead to pocketing coloured balls. For each initial condition s ∈ I , we apply the (r, γ, λ)-predictron (shared cores, 16 model steps, no skip connections) to obtain predictions gλ. We sum the predictions that correspond
to pocketing any ball except the white ball, and to real-world discounts γ = 0.98 and γ = 1. We select the condition s∗ that maximises this sum. We then roll forward the pool simulator from s∗ and log the number of pocketing events. Figure 2 shows a sampled rollout, using the predictron to pick s∗. When providing the choice of 128 angles and two velocities for initial conditions (|I| = 256), this procedure resulted in pocketing 27 coloured balls in 50 episodes. Using the same procedure with an equally deep convolutional network only resulted in 10 pocketing events. These results suggest that the lower loss of the learned (r, γ, λ)-predictron translated into meaningful improvements when informing decisions. A video of the rollouts selected by the predictron is available here: https://youtu.be/BeaLdaN2C3Q.
6 RELATED WORK
Lee et al. (2015) introduced a neural network architecture where classifications branch off intermediate hidden layers. An important difference with respect to the λ-predictron, is that the weights are hand-tuned as hyper-parameters, whereas in the predictron the λ weights are learnt and, more importantly, conditional on the input. Another difference is that the loss on the auxiliary classifications is used to speed up learning, but the classifications themselves are not combined into an aggregate prediction; the output of the model itself is the deepest prediction. Graves (2016) introduced an architecture with adaptive computation time (ACT), with a discrete (but differentiable) decision on when to halt, and aggregating over the outputs at each pondering step. This is related to our λ weights, but obtains depth in a different way; one notable difference is that the λ-predictron can choose different pondering depths for each of its predictions. Value iteration networks (VINs) (Tamar et al., 2016) also learn value functions end-to-end using an internal model, similar to the (non-λ) predictron. However, VINs plan via convolutional operations over the full input state space; whereas the predictron plans via imagined trajectories through an abstract state space. This may allow the predictron architecture to scale much more effectively in domains that do not have a natural two-dimensional encoding of the state space. The notion of learning about many predictions of the future relates to work on predictive state representations (PSRs; Littman et al., 2001), general value functions (GVFs; Sutton et al., 2011), and nexting (Modayil et al., 2012). Such predictions have been shown to be useful as representations (Schaul and Ring, 2013) and for transfer (Schaul et al., 2015). So far, however, none of these have been considered for learning abstract models. Schmidhuber (2015) discusses learning abstract models, but maintains separate losses for the model and a controller, and suggests training the model unsupervised to compactly encode the entire history of observations, through predictive coding. The predictron’s abstract model is instead trained endto-end to obtain accurate values.
7 CONCLUSION
The predictron is a single differentiable architecture that rolls forward an internal model to estimate external values. This internal model may be given both the structure and the semantics of traditional reinforcement learning models. But unlike most approaches to model-based reinforcement learning, the model is fully abstract: it need not correspond to the real environment in any human understandable fashion, so long as its rolled-forward “plans” accurately predict outcomes in the true environment. The predictron may be viewed as a novel network architecture that incorporates several separable ideas. First, the predictron outputs a value by accumulating rewards over a series of internal planning steps. Second, each forward pass of the predictron outputs values at multiple planning depths. Third, these values may be combined together, also within a single forward pass, to output an overall ensemble value. Finally, the different values output by the predictron may be encouraged to be self-consistent with each other, to provide an additional signal during learning. Our experiments demonstrate that these differences result in more accurate predictions of value, in reinforcement learning environments, than more conventional network architectures. We have focused on value prediction tasks in uncontrolled environments. However, these ideas may transfer to the control setting, for example by using the predictron as a Q-network (Mnih et al., 2015). Even more intriguing is the possibility of learning an internal MDP with abstract internal actions, rather than the MRP considered in this paper. We aim to explore these ideas in future work.
A ARCHITECTURE
The state representation f is a two-layer convolutional neural network (LeCun et al., 1998). There is a core c, again based on convolutions, that combines both MRP model and λ-network into a single repeatable module, such that sk+1, rk+1, γk+1,λk = c(sk). This core is deterministic, and is duplicated K times in the predictron with shared weights. (The predictron with unshared weights hasK distinct cores.) Finally, the value network v is a fully connected neural network that computes vk = v(sk). Concretely, the core (Figure 7) consists first of a convolutional layer that maps into an intermediate (hidden) layer. From this layer, another two convolutions compute the next abstract state of the predictron. Additionally, this same hidden layer is flattened and fed into three separate networks, with two fully connected layers each. The outputs of these three networks represent the internal rewards, discounts, and lambdas. A similar small network also hangs off the internal states, in addition to the core, and computes the values. All convolutions use 3×3 filters and a stride of one, and use padding to retain the size of the feature maps. All feature maps have 32 channels. The hidden layers within the MLPs have 32 hidden units. In Figure 7 the convolutional layers are schematically drawn with three channels, flattening is represented by curly brakets, while the arrows represent the small multi-layer perceptrons which compute values, rewards, discounts and lambdas. We allow up to 16 model steps in our experiments, resulting in 52-layer deep networks—two convolutional layers for the state representations, 3× 16 = 48 convolutional layers for the core steps, and two fully-connected layers for the values on top of the final state. Between each two layers we apply batch normalization (Ioffe and Szegedy, 2015) followed by a ReLU non-linearity (Glorot et al., 2011). The value and reward networks end with a linear layer, whereas the discount and λ-networks additionally add a sigmoid non-linearity to ensure that these quantities are in [0, 1].
B TRAINING
All experiments used the supervised (Monte-Carlo) update described in Section 4 except for the semi-supervised experiment which used the consistency update described in Section 4.1. We update all parameters by applying the Adam optimiser (Kingma and Ba, 2015) to stochastic gradients of the corresponding loss functions. Each return is normalised by dividing it by its standard deviation (as measured, prior to the experiment, on a set of 20,000 episodes). In all experiments, the learning rate was 0.001, and the other parameters of the Adam optimiser were β1 = 0.9, β2 = 0.999, and = 10−8. We used mini-batches of 100 samples.
C COMPARING ARCHITECTURES OF DIFFERENT DEPTHS
We investigated the effect of changing the depth of the networks, with and without skip connections. Figure 8 in shows that skip connections (dashed lines) make the conventional architectures
(black/grey lines) more robust to the depth (i.e., the black/grey dashed lines almost overlap, especially on pool), and that the predictron outperforms the corresponding feedforward or recurrent baselines for all depths, with and without skips.
D CAPACITY COMPARISONS
In this section, we present some additional experiments comparing the predictron to more conventional deep networks. The purposes of these experiments are 1) to show that the conclusions obtained above do not depend on the precise architecture used, and 2) to show that the structure of the network—whether we use a predictron or not—is more important than the raw number of parameters. Specifically, we again consider the same 20 by 20 random mazes, and the pool task described in the main text. As described in Section A, for the results in the paper we used an encoder that preserved the size of the input plans, 20 × 20 for the mazes and 28 × 28 for pool. Each convolution had 32 channels and therefore the abstract states were 20 × 20 × 32 for the mazes and 28 × 28 × 32 for pool. We now consider a different architecture, where we no longer pad the convolutions used in the encoder. For the mazes, we still use two layers of 3 × 3 stride-1 convolutions, which means the planes reduce in size to 16× 16. This means that the abstract states are about one third smaller. For pool, we use three 5×5 stride-1 convolutions, which bring us from 28×28 down to 16×16 as well. So, the abstract states are now of equal size for both experiments. For pool, this is approximately a two-thirds reduction, which helps reduce the compute needed to run the model. Most of the parameters in the predictron are in the fully connected layers. Previously, the first fully connected layer for each of the internal values, rewards, discounts, and λ-parameters would take a flattened abstract state, and then go into 32 hidden nodes. This means the number of parameters in this layer were 20× 20× 32× 32 = 409, 600 for the mazes and 28× 28× 32× 32 = 802, 816 for pool. The predictron with shared core would have four of these layers, one for each of the internal values, rewards, discounts, and λs, compared to one for the deep network which only has values. We change this in two ways. First, we add a 1× 1 convolution with a stride of 1 and 8 channels before the first fully connected layer for each of these outputs. This reduces the number of channels, and therefore the number of parameters in the subsequent fully-connected layer, by one fourth. Second, we tested three different numbers of hidden nodes: 32, 128, or 512.
The deep network with 128 hidden nodes for its values has the exact same number of parameters as the (r, γ, λ)-predictron with 32 hidden nodes for each of its outputs. Before, the deep network had fewer parameters, because we kept this number fixed at 32 across experiments. This opens the question of whether the improved performance of the predictron was not just an artifact of having more parameters. We tested this hypothesis, and the results are shown in Figure 9. Figure 9 shows that in each setting—on the mazes and pool, and with or without shared cores— both. The predictrons always performed better than all the deep networks. This includes the 32 node predictron (darkest red) compared to the 512 node deep network (lightest blue), even though the latter has approximately 4 times as many parameters (1.27M vs 4.85M). This means that the number of parameters mattered less than whether or not we use a predictron.
E ADDITIONAL DOMAIN DETAILS
We now provide some additional details of domains.
E.1 POOL
To generate sequences in the Pool domain, the initial locations of 4 balls of different colours are sampled at random. The white ball is the only one moving initially. Its velocity has a norm sampled uniformly between 7 and 14. The initial angle is sampled uniformly in the range (0, 2π). From the initial condition, the Mujoco simulation is run forward until all balls have stopped moving; sequences that last more than 151 frames are rejected, and a new one is generated as replacement. Each frame is rendered by Mujoco as a 280x280 RGB image, and subsequently downsampled through bilinear interpolation to a 28x28 RGB input (see Figure 10 for an example). Since the 280 signals described in Section 6.1 as targets for the Pool experiments have very different levels of sparsity, resulting in values
with very different scales, we have normalised the pseudo returns. The normalization procedure consisted in dividing all targets by their standard deviation, as empirically measured across an initial set of 20,000 sequences.
E.2 RANDOM MAZES
To generate mazes, we first determine, with a stochastic line search, a number of walls such that the top-left corner is connected to the bottom-right corner (both always forced to be empty) in approximately 50% of the mazes. We then shuffle the wall locations uniformly at random. For 20 by 20 mazes this means 70% of locations are empty and 30% contain walls. More than a googol such 20-by-20 mazes exist (as $\binom{398}{120} > 10^{100}$). | 1. What is the main contribution of the paper in terms of computational structure and function approximation?
2. How does the proposed approach differ from other baselines or prediction tasks that require sequential reasoning?
3. What are the strengths of the paper regarding its novelty and experimental methodology?
4. Are there any areas where the paper's language or explanation could be improved for better clarity and understanding? | Review | Review
This work proposes a computational structure of function approximator with a strong prior: it is optimized to act as an abstract MRP, capable of learning its own internal state, model, and notion of time-step. Thanks to the incorporation of a λ-return-style return estimation, it can effectively adapt its own "thinking depth" to the current input, thus performing a sort of soft iterative inference.
Such a prior, maintained by strong regularization, helps the model perform better than similar baselines on prediction tasks that require some form of sequential reasoning.
The proposed idea is novel and a very interesting take on forcing internal models upon function approximators, which invites future work. The experimental methodology is complete, showcases the potential of the approach, and nicely analyses the iterative/adaptive thinking depth learned by the model.
As pointed out by my previous comments, the paper reads well but utilizes language that may confuse a reader unfamiliar with the subject. I think some rewording could be done without having much impact on the depth of the paper. In particular, introducing the method as a regularized model pushed to act like an MRP, rather than an actual MRP performing some abstract reasoning, may help confused readers such as myself. |
ICLR | Title
Provable Robust Learning for Deep Neural Networks under Agnostic Corrupted Supervision
Abstract
Training deep neural models in the presence of corrupted supervision is challenging, as the corrupted data points may significantly impact generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption and provides a unified framework for both classification and regression problems. Different from many existing approaches that quantify the quality of individual data points (e.g., via loss values) and filter out data points accordingly, the proposed algorithm focuses on controlling the collective impact of data points on the averaged gradient. Even when a corrupted data point fails to be excluded by the proposed algorithm, it will have very limited impact on the overall loss, compared with state-of-the-art methods that filter data points based on loss values. Extensive empirical results on multiple benchmark datasets demonstrate the robustness of the proposed method under different types of corruption.
1 INTRODUCTION
Corrupted supervision is a common issue in real-world learning tasks, where the learning targets are not accurate due to various factors in the data collection process. Such corruptions are especially severe for deep learning models, whose large degrees of freedom make them easily memorize corrupted examples and thus susceptible to overfitting (Zhang et al., 2016).
There are extensive efforts to achieve robustness against corrupted supervision. A natural approach to dealing with corrupted supervision in deep neural networks (DNNs) is to reduce the model's exposure to corrupted data points during training. By detecting and filtering (or re-weighting) the possibly corrupted samples, the learning is expected to deliver a model similar to one trained on clean data (without corruption) (Kumar et al., 2010; Han et al., 2018; Zheng et al., 2020). Different criteria have been designed to identify corrupted data points during training. For example, Kumar et al. (2010); Han et al. (2018); Jiang et al. (2018) leveraged the loss values of data points; Zheng et al. (2020) tapped prediction uncertainty for filtering data; Malach & Shalev-Shwartz (2017) used the disagreement between two deep networks; Reed et al. (2014) utilized the prediction consistency of neighboring iterations. The success of these methods highly depends on the effectiveness of the detection criteria in correctly identifying the corrupted data points. Since the corrupted labels remain unknown throughout learning, such "unsupervised" detection approaches may not be effective: they either lack theoretical guarantees of robustness (Han et al., 2018; Reed et al., 2014; Malach & Shalev-Shwartz, 2017; Li et al., 2017) or provide guarantees only under assumptions about the availability of prior knowledge of the corruption type (Zheng et al., 2020; Shah et al., 2020; Patrini et al., 2017; Yi & Wu, 2019). Another limitation of many existing approaches is that they are exclusively designed for classification problems (e.g., Malach & Shalev-Shwartz (2017); Reed et al. (2014); Menon et al. (2019); Zheng et al. (2020)) and are not straightforward to extend to regression problems.
To tackle these challenges, this paper presents a unified optimization framework with robustness guarantees that makes no assumptions on how the supervision is corrupted, and that is applicable to both classification and regression problems. Instead of developing an accurate criterion for detecting corrupted samples, we adopt a novel perspective and focus on limiting the collective impact of corrupted samples during the learning process through robust mean estimation of gradients. Specifically, if our estimated average gradient is close to the gradient of the clean data during the learning iterations,
then the final model will be close to the model trained on clean data. As such, a corrupted data point can still be used during training when it does not considerably alter the averaged gradient. This observation has a remarkable impact on our algorithm design: instead of explicitly quantifying (and identifying) individual corrupted data points, which is a hard problem in itself, we deal with an easier task, i.e., eliminating training data points that significantly distort the mean gradient estimate. One immediate consequence of this design is that, even when a corrupted data point fails to be excluded by the proposed algorithm, it is likely to have very limited impact on the overall loss, compared with state-of-the-art methods that filter data points based on loss values. We perform experiments on both regression and classification with corrupted supervision on multiple benchmark datasets. The results show that the proposed method outperforms state-of-the-art approaches.
2 BACKGROUND
Learning from corrupted data (Huber, 1992) has attracted considerable attention in the machine learning community (Natarajan et al., 2013). Many recent studies have investigated the robustness of classification tasks with noisy labels. For example, Kumar et al. (2010) proposed a self-paced learning (SPL) approach, which assigns higher weights to examples with smaller loss. A similar idea is used in curriculum learning (Bengio et al., 2009), in which the model learns easy samples before harder ones. Alternative methods inspired by SPL include learning the data weights (Jiang et al., 2018) and collaborative learning (Han et al., 2018; Yu et al., 2019). Label correction (Patrini et al., 2017; Li et al., 2017; Yi & Wu, 2019) is another approach, which revises the original labels with the goal of recovering clean labels from corrupt ones. However, since we do not have access to which data points are corrupted, it is hard to obtain provable guarantees for label correction without strong assumptions regarding the corruption type.
Accurate estimation of gradients is a key step for successful optimization, and the relationship between gradient estimation and final convergence has been widely studied in the optimization community. Since computing an approximate (and potentially biased) gradient is often more efficient than computing the exact gradient, many studies used approximate gradients to optimize their models and showed that the resulting methods suffer from the biased-estimation problem if there are no assumptions on the gradient estimation (d'Aspremont, 2008; Schmidt et al., 2011; Bernstein et al., 2018; Hu et al., 2020; Ajalloeian & Stich, 2020).
A closely related topic is robust mean estimation. Given corrupted data, robust mean estimation aims at producing an estimate µ̂ such that the difference ‖µ̂ − µ‖2 between the estimated mean on the corrupted data and the mean of the clean data is minimized. It was shown that the median or the trimmed mean is the optimal statistic for mean estimation on one-dimensional data (Huber, 1992). However, robustness in high dimensions is quite challenging, since applying the coordinate-wise optimal robust estimator leads to an error factor O(√d) that scales with the data dimension. Although some classical work, such as the Tukey median (Tukey, 1975), successfully designed estimators that get rid of the O(√d) error, the corresponding algorithms are not polynomial-time. More recently, Diakonikolas et al. (2016); Lai et al. (2016) successfully designed polynomial-time algorithms with dimension-free error bounds. These results have been widely applied to improve algorithmic efficiency in various scenarios (Dong et al., 2019; Cheng et al., 2020).
Robust optimization aims to optimize the model given corrupted data. Many previous studies improve the robustness of optimization in different problem settings. However, most of them either study linear regression and its variants (Bhatia et al., 2015; 2017; Shen & Sanghavi, 2019) or study convex optimization (Prasad et al., 2018), so those results cannot be directly generalized to deep neural networks. Diakonikolas et al. (2019) propose a very general non-convex optimization method with an agnostic-corruption guarantee. However, the space complexity of their algorithm is high, so it cannot be applied to deep neural networks given current hardware limitations.
3 METHODOLOGY
Before introducing our algorithm, we first discuss the corrupted supervision setting. To characterize agnostic corruptions, we make use of an adversary that tries to corrupt the supervision of clean data. There is no limitation on how the adversary corrupts the supervision: it can either randomly permute the targets, or corrupt them in a way that maximizes the negative impact (i.e., lowers performance).
Firstly, the adversary can choose up to an ε-fraction of the clean targets Dy ∈ Rn×q and change the selected rows of Dy to arbitrary valid values, generating D̃y ∈ Rn×q. Then, the adversary returns the corrupted dataset Dx, D̃y to our learning algorithm A. In this process, the only constraint on the adversary is the fraction ε; the adversary has full knowledge of the data, and even of the learning algorithm A. A natural question to ask is: given a dataset with ε-fraction corrupted supervision Dx ∈ Rn×p, D̃y, and a learning objective φ : Rp × Rq × Rd → R parameterized by θ, can we output parameters θ ∈ Rd such that ‖∇θφ(θ; Dx, Dy)‖ is minimized? When ε = 0, we have D̃y = Dy and learning is done on the clean data; stochastic gradient descent can then converge to a stationary point, where ‖∇θφ(θ; Dx, Dy)‖ = 0. However, when the supervision is corrupted as above, this is no longer the case, due to the error in θ induced by the corrupted data. We thus want an efficient algorithm that finds a model θ minimizing ‖∇θφ(θ; Dx, Dy)‖. A robust model θ should have a small value of ‖∇θφ(θ; Dx, Dy)‖, and we hypothesize that a smaller ‖∇θφ(θ; Dx, Dy)‖ leads to better generalization.
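As a concrete illustration (our own sketch, not from the paper), the following generates one simple instance of an ε-corrupted regression dataset by replacing a random ε-fraction of the targets with uniform noise; note that this is only one of many possible adversaries, and the function name and noise choice are illustrative:

```python
import numpy as np

def corrupt_supervision(y, eps, rng):
    """Replace a random eps-fraction of the rows of y with arbitrary valid values."""
    n = y.shape[0]
    idx = rng.choice(n, size=int(eps * n), replace=False)
    y_tilde = y.copy()
    y_tilde[idx] = rng.uniform(y.min(), y.max(), size=y_tilde[idx].shape)
    return y_tilde

rng = np.random.default_rng(0)
y = rng.normal(size=(1000, 10))             # clean targets D_y
y_tilde = corrupt_supervision(y, 0.2, rng)  # corrupted targets with eps = 0.2
```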
3.1 STOCHASTIC GRADIENT DESCENT WITH BIASED GRADIENT
A direct consequence of corrupted supervision is biased gradient estimation. In this section, we first analyze how such biased gradient estimation affects the robustness of learning. The classical analysis of stochastic gradient descent (SGD) requires access to a stochastic gradient oracle that provides an unbiased estimate of the true gradient. However, corrupted supervision leads to corrupted gradients, and it is thus difficult to obtain unbiased gradient estimates without assumptions on how the gradients are corrupted. We start the analysis with the following informal theorem (without an elaborated discussion of its assumptions) on how a biased gradient affects the final convergence of SGD; its formal version is provided as Theorem 4 in the Appendix.
Theorem 1 (Convergence of Biased SGD (Informal)) Under mild assumptions, let ζ be the maximum ℓ2 norm of the difference between the clean minibatch gradient and the corrupted minibatch gradient, i.e., ‖g − g̃‖ ≤ ζ. Then, using biased gradient estimates, SGD converges to a ζ-approximate stationary point: E‖∇φ(θt)‖² = O(ζ²). Remark 1 In the corrupted supervision setting, let g̃ be the gradient estimated from the corrupted data D̃ and g the gradient estimated from the clean data D. Assuming ‖g̃ − g‖ ≤ ζ, it follows that running SGD on the corrupted dataset converges to a ζ-approximate stationary point of the objective defined by the clean data. Note that the difference between the above theorem and a typical convergence theorem is that we are using a biased gradient estimate.
According to Theorem 1 and the remark, a robust estimate of the gradient g is the key to ensuring a robust model (one that converges to the clean solution). We also assume the loss function has the form L(y, ŷ); many commonly used loss functions fall into this category.
3.2 ROBUST GRADIENT ESTIMATION FOR GENERAL DATA CORRUPTION
We first introduce Algo. 2 for general corruption (i.e., corruption of the features and/or the supervision). The algorithm excludes the data points with the largest gradient norms and uses the empirical mean of the remaining points to update the model. In Thm. 2 we give its robustness property.
Algorithm 1: Robust Mean Estimation for Corrupted Gradients
input: gradient matrix G ∈ Rm×d, corruption rate ε; return estimated mean µ̂ ∈ Rd
1. For each row zi of G, calculate the ℓ2 norm ‖zi‖.
2. Select the ε-fraction of rows with the largest ‖zi‖.
3. Remove the selected rows and return the empirical mean of the remaining rows as µ̂.
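For concreteness, here is a direct NumPy rendering of Algorithm 1 (a minimal sketch in our own notation):

```python
import numpy as np

def robust_mean(G, eps):
    """Algorithm 1: drop the eps-fraction of rows with the largest l2 norm
    and return the empirical mean of the remaining rows."""
    m = G.shape[0]
    norms = np.linalg.norm(G, axis=1)   # ||z_i|| for each row
    n_keep = m - int(eps * m)           # keep the (1 - eps)-fraction
    keep = np.argsort(norms)[:n_keep]   # rows with the smallest norms
    return G[keep].mean(axis=0)
```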
Assumption 1 (Individual L-smooth loss) For every individual loss function φi, there exists a constant L > 0 such that for a clean sample i, we have |φi(x) − φi(y)| ≤ L‖x − y‖ for any x, y.
Algorithm 2: (PRL(G)) Provable Robust Learning for General Corrupted Data
input: label-corrupted dataset Dx, D̃y, learning rate γt; return model parameter θ
for t = 1 to maxiter do
  Randomly sample a minibatch M from Dx, D̃y
  Calculate the individual gradients G̃ on M
  Apply Algorithm 1 to G̃ to obtain the robust gradient estimate µ̂
  Update the model: θt+1 = θt − γt µ̂
end

Theorem 2 (Robust Gradient Estimation for Data Corruption) Let G̃ ∈ Rm×d be a corrupted gradient matrix, G ∈ Rm×d the corresponding clean gradient matrix, and µ the empirical mean function. Then the output µ̂ of Algo. 1 applied to G̃ satisfies ‖µ(G) − µ̂‖ = O(ε√d). Moreover, if Asm. 1 holds, we further have ‖µ(G) − µ̂‖ = O(εL).
Combining with the aforementioned convergence analysis of biased SGD, we get the following:
Corollary 1 (Robust Optimization for Corrupted Data) Given the assumptions used in Thm. 1 and Asm. 1, applying Algo. 2 to any ε-fraction corrupted data, we get min_{t∈[T]} E‖∇φ(θt)‖ = O(εL) for large enough T. If Asm. 1 does not hold, then we get min_{t∈[T]} E‖∇φ(θt)‖ = O(ε√d) for large enough T.
The robustness guarantee states that even when training on generally corrupted data (corrupted supervision is a special case), Algo. 2 guarantees that the gradient norm on the remaining data cannot be too large. Since Thm. 2 gives a dimension-free error bound when Asm. 1 holds, Corollary 1 also gives a dimension-free robustness guarantee under Asm. 1. We defer the detailed discussion of the O(εL) bound to later sections. Although the error bound O(εL) sounds good, it still has several drawbacks. First, a dimension-free error bound means the error does not grow with increasing dimension, which is critical when working with neural networks due to the extremely large gradient dimension (i.e., the number of parameters of the neural network); Thm. 2 gives the dimension-free error bound only when Asm. 1 holds, which is quite strong. In addition, even when Asm. 1 holds, L can be large, leading to a large gradient estimation error. Existing work (Diakonikolas et al., 2019) already achieves a dimension-free O(√ε) guarantee for general corruptions, which is a much better theoretical result than the above theorem. However, in practice, we found that the gradient norms of deep neural networks for individual data points are usually not very large, even at the beginning of training. This may be partially due to the network structure. Further discussion of this issue is beyond the scope of this paper, but the theoretical bound above states that, for general models, robustness should depend on the number of parameters.
Another concern with Alg. 2 is efficiency: it requires computing individual gradients. Although there are advanced approaches to obtaining individual gradients, e.g., Goodfellow (2015), they are still relatively slow compared to standard back-propagation. Moreover, these methods are usually not compatible with popular components such as batch normalization (BN), since the individual gradients are not independent inside BN; using them loses the benefits of parallelization.
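To illustrate the cost, the following sketch implements a PRL(G) step that obtains individual gradients with a plain Python loop (our own deliberately simple rendering; a scalar-valued `loss_fn` and a batch-norm-free model are assumed, and this is not the implementation used in the experiments):

```python
import torch

def prl_g_step(model, loss_fn, optimizer, x, y, eps):
    params = [p for p in model.parameters() if p.requires_grad]
    grads = []
    for i in range(x.shape[0]):  # per-sample gradients (slow but dependency-free)
        loss_i = loss_fn(model(x[i:i + 1]), y[i:i + 1])
        g_i = torch.autograd.grad(loss_i, params)
        grads.append(torch.cat([g.flatten() for g in g_i]))
    G = torch.stack(grads)                         # m x d gradient matrix
    n_keep = G.shape[0] - int(eps * G.shape[0])
    keep = torch.argsort(G.norm(dim=1))[:n_keep]   # Algorithm 1: drop largest norms
    mu = G[keep].mean(dim=0)                       # robust mean estimate
    offset = 0                                     # write mu back into .grad
    for p in params:
        n = p.numel()
        p.grad = mu[offset:offset + n].view_as(p).clone()
        offset += n
    optimizer.step()
```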
3.3 ROBUST GRADIENT ESTIMATION FOR ONE DIMENSIONAL CORRUPTED SUPERVISION
In this section, we show that the above robustness bound can be improved if we assume the corruption only comes from the supervision. Moreover, by fully exploiting the gradient structure under corrupted supervision, our algorithm becomes much more efficient while remaining compatible with batch normalization. We use the one-dimensional supervision setting (binary classification or single-target regression) to illustrate the intuition, and extend it to more general settings in the next section. Consider a high-dimensional supervised learning problem with X ∈ Rn×p and y ∈ Rn. The goal is to learn a function f, parameterized by θ ∈ Rd, minimizing the loss $\min_\theta \sum_{i=1}^{n} \phi_i = \min_\theta \sum_{i=1}^{n} L(y_i, f(x_i, \theta))$. The gradient for a data point i is $\nabla_\theta \phi_i = \frac{\partial l_i}{\partial f_i}\,\frac{\partial f_i}{\partial \theta} = \alpha_i g_i$.
One key observation is the following: when only the supervision is corrupted, the corruption contributes only to the term $\alpha_i = \frac{\partial l_i}{\partial f_i}$, which is a scalar in the one-dimensional setting. In other words, given the clean gradient of the i-th point, $g_i \in \mathbb{R}^d$, corrupted supervision can only perturb the length of the gradient vector, changing the gradient from $\alpha_i g_i$ to $\delta_i g_i$, where $\delta_i = \frac{\partial \tilde{l}_i}{\partial f_i}$. If αi and δi were known, we could easily eliminate the impact of corrupted supervision; but this is not the case, since we have only the possibly corrupted target ỹi, as opposed to the ground truth yi.
On the other hand, the fact that corrupted supervision merely scales the clean gradient can be used to reshape the robust optimization problem. Recall that in every iteration, we update our model by θ+ = θ − γ µ(G), where µ denotes the empirical mean function and $G = [\nabla_\theta\phi_1, \ldots, \nabla_\theta\phi_m]^{T} \in \mathbb{R}^{m \times d}$ is the gradient matrix with minibatch size m. We then have the following:
Problem 1 (Robust Gradient Estimation for Corrupted Supervision - One-Dimensional Case) Given a clean gradient matrix G ∈ Rm×d and an ε-corrupted matrix G̃ in which at most an ε-fraction of the rows are corrupted from αigi to δigi, design an algorithm A : Rm×d → Rd that minimizes ‖µ(G) − A(G̃)‖.
Note that when ‖δi‖ is large, the corrupted gradient has a large effect on the empirical mean, and vice versa. This motivates us to develop an algorithm that filters out data points by the loss-layer gradient norm ‖∂li/∂fi‖. If the norm of the loss-layer gradient of a data point is large (in the one-dimensional case, this gradient reduces to a scalar and the norm becomes its absolute value), we exclude the data point when computing the empirical mean of gradients for this iteration. Note that this algorithm is applicable to both regression and classification problems. In particular, when using the mean squared error (MSE) loss for regression, the loss-layer gradient norm is a monotone function of the loss itself, and the algorithm reduces to self-paced learning (Kumar et al., 2010). We summarize the procedure in Alg. 3 and extend it to the more general multi-dimensional case in the next section.
Algorithm 3: (PRL(L)) Efficient Provable Robust Learning for Corrupted Supervision
input: dataset Dx, D̃y with corrupted supervision, drop fraction τ, learning rate γt; return model parameter θ
for t = 1 to maxiter do
  Randomly sample a minibatch M from Dx, D̃y
  Compute the predictions Ŷ on M
  Calculate the loss-layer gradient norm for each data point in M (i.e., ‖ŷ − y‖ for the mean squared error or the cross entropy)
  Remove the top τ-fraction of the data in M according to ‖ŷ − y‖
  Return the empirical mean of the remaining gradients as the robust mean estimate µ̂
  Update the model: θt+1 = θt − γt µ̂
end
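For the regression case with the MSE loss, a PRL(L) step can be implemented in a few lines of PyTorch (our own minimal sketch; τ is the drop fraction, and batch-normalization interactions are ignored):

```python
import torch

def prl_l_step(model, optimizer, x, y, tau):
    pred = model(x)
    with torch.no_grad():
        score = (pred - y).norm(dim=1)   # loss-layer gradient norm ||y_hat - y||
        n_keep = x.shape[0] - int(tau * x.shape[0])
        keep = torch.argsort(score)[:n_keep]      # drop the top tau-fraction
    loss = ((pred[keep] - y[keep]) ** 2).mean()   # MSE on the retained points only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```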
3.4 EXTENSION TO MULTI-DIMENSIONAL CORRUPTED SUPERVISION
To extend our algorithm and analysis to the multi-dimensional case, let q be the supervision dimension. The gradient for each data point is $\nabla_\theta \phi_i = \frac{\partial l_i}{\partial f_i}\,\frac{\partial f_i}{\partial \theta}$, where $\frac{\partial l_i}{\partial f_i} \in \mathbb{R}^q$ is the gradient of the loss with respect to the model outputs, and $\frac{\partial f_i}{\partial \theta} \in \mathbb{R}^{q \times d}$ is the gradient of the model outputs with respect to the model parameters. Similarly, when the supervision is corrupted, the corruption comes from the term $\frac{\partial l_i}{\partial f_i}$, which is now a vector. Let $\delta_i = \frac{\partial \tilde{l}_i}{\partial f_i} \in \mathbb{R}^q$, $\alpha_i = \frac{\partial l_i}{\partial f_i} \in \mathbb{R}^q$, $W_i = \frac{\partial f_i}{\partial \theta} \in \mathbb{R}^{q \times d}$, and let m be the minibatch size. Denote the clean gradient matrix by G ∈ Rm×d, where the i-th row is gi = αiWi. The multi-dimensional robust gradient estimation problem is then:

Problem 2 (Robust Gradient Estimation for Corrupted Supervision - Multi-Dimensional Case) Given a clean gradient matrix G and an ε-corrupted matrix G̃ in which at most an ε-fraction of the rows are corrupted from αiWi to δiWi, design an algorithm A : Rm×d → Rd that minimizes ‖µ(G) − A(G̃)‖.
We start our analysis by investigating the effect of a filtering-based algorithm, i.e., using the empirical mean gradient of a (1 − ε)-fraction subset to estimate the empirical mean of the clean gradient matrix. We have the following result for a filtering-based algorithm that keeps an arbitrary (1 − ε)-fraction of the rows (proof in the Appendix):
Lemma 1 (Gradient Estimation Error for Dropping an ε-Fraction of the Data) Let G̃ ∈ Rm×d be a corrupted matrix generated as in Problem 2, and let G ∈ Rm×d be the original clean gradient matrix. Suppose an arbitrary (1 − ε)-fraction of the rows of G̃ is selected to form the matrix N ∈ Rn×d. Let µ be the empirical mean function. Assume the clean gradients before the loss layer have bounded operator norm, ‖Wi‖op ≤ C; let the maximum clean loss-layer gradient be maxi∈G ‖αi‖ = k and the maximum corrupted loss-layer gradient be maxi∈N ‖δi‖ = v. Then we have:

$$\|\mu(G) - \mu(N)\| \le Ck\,\frac{3\epsilon - 4\epsilon^2}{1 - \epsilon} + Cv\,\frac{\epsilon}{1 - \epsilon}.$$
We see that v is the only term related to the corrupted supervision. If v is large, the bound is not safe, since the right-hand side can be arbitrarily large (i.e., an adversary can change the labels in a way that makes v extremely large). Thus, controlling the magnitude of v provides an effective way to control the bound. For example, if we manage to ensure v ≤ k, the bound is safe. This can be achieved by sorting the gradient norms at the loss layer and discarding the largest ε-fraction of data points. We thus have the following result.
Theorem 3 (Robust Gradient Estimation for Supervision Corruption) Let G̃ be a corrupted matrix generated as in Problem 2, q the label dimension, and µ the empirical mean of the clean matrix G. Assume the clean gradients before the loss layer have bounded operator norm, ‖Wi‖op ≤ C. Then the gradient estimate µ̂ in Algo. 3 satisfies ‖µ − µ̂‖ = O(ε√q) ≈ O(ε).
Comparing Thm. 2 and Thm. 3, we see that when the corruption only comes from the supervision, the dependence on d is reduced to q, where in most deep learning cases we have q ≪ d. Applying Thm. 1 directly shows that our algorithm is also robust in multi-label settings.
3.5 COMPARISON WITH DIAKONIKOLAS ET AL. (2019) AND OTHER METHODS
SEVER (Diakonikolas et al., 2019) showed promising state-of-the-art theoretical results for general corruptions, achieving a dimension-free O(√ε) guarantee. Compared to Diakonikolas et al. (2019), we make two contributions: (a) by assuming the corruption comes from the labels (we admit this is quite strong compared to the general corruption setting), we obtain a better error rate; (b) our algorithm scales to deep neural networks while Diakonikolas et al. (2019) does not. We consider this a contribution, given that DNN-based models are currently the state-of-the-art methods for noisy-label learning problems (at least in empirical performance).
Although Diakonikolas et al. (2019) achieves very nice theoretical results, unfortunately it cannot be applied to DNNs with the current best hardware configuration. Diakonikolas et al. (2019) builds on dimension-free robust mean estimation breakthroughs, and we note that most robust mean estimators rely on filtering out data by computing the score of the projection onto the maximum singular vector. For example, Diakonikolas et al. (2019) requires performing SVD on an n × d individual-gradient matrix, where n is the sample size and d is the number of parameters. This works well for small datasets and small models, since both n and d are then small enough for current memory limits. However, for deep neural networks, this matrix size is far beyond current GPU memory capacity. This may be why Diakonikolas et al. (2019) only shows ridge regression and SVM results on small data (we are not saying that they should provide DNN results). In our experiments, n is 60,000 and d is on the order of millions (the number of network parameters); it is impractical to store 60,000 copies of a neural network on a single GPU card. In contrast, our algorithm does not need to store the full gradient matrix: by considering only the loss-layer gradient norm, we can easily extend our algorithm to DNNs, and we show that this simple strategy works well in both theory and challenging empirical tasks.
We note that some methods for linear models (Bhatia et al., 2015; 2017) or convex problems (Prasad et al., 2018) achieve better robustness guarantees. However, most of them cannot be directly applied to deep neural networks.
4 RELATIONSHIP TO SELF-PACED LEARNING (SPL)
SPL looks very similar to our method at first glance: instead of keeping data points with a small gradient norm, SPL keeps data points with a small loss. The gradient norm and the loss function can be tied via the famous Polyak-Łojasiewicz (PL) condition, which assumes there exists some constant s > 0 such that $\frac{1}{2}\|\nabla\phi(x)\|^2 \ge s\,(\phi(x) - \phi^*)$ holds for all x. When the neural network is highly over-parameterized, φ∗ can be assumed to be equal across different samples, since neural networks can achieve zero training loss (Zhang et al., 2016). By sorting the error φ(xi) for every data point, SPL is thus actually sorting a lower bound on the gradient norm when the PL condition holds. However, the ranking by gradient norm and the ranking by loss can be very different, since there is no guarantee that the gradient norm increases monotonically with the loss value. We provide an illustration of why SPL is not robust from a geometric perspective in the appendix. Here we show that even for a simple square loss, the monotonic relationship is easy to break. One easy counter-example is $\phi(x_1, x_2) = 0.5 x_1^2 + 50 x_2^2$: taking the two points (1000, 1) and (495, −49.5), we find that the monotonic relationship does not hold, since the second point has the smaller loss but the larger gradient norm. Nocedal et al. (2002) showed that the monotonic relationship holds for the square loss $\phi(x) = \frac{1}{2}(x - x^*)^{T} Q (x - x^*)$ if the condition number of Q is smaller than $3 + 2\sqrt{2}$, which is a quite strong assumption, especially when x is high-dimensional. For more general loss functions (e.g., neural networks), the required assumptions on the condition number can only become stronger, further breaking the monotonic relationship. Thus, although SPL sorts a lower bound on the gradient norm under mild assumptions, our algorithm is significantly different from SPL and its variations.
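The counter-example is easy to verify numerically (using the gradient ∇φ = (x₁, 100·x₂)):

```python
import numpy as np

phi = lambda x: 0.5 * x[0] ** 2 + 50 * x[1] ** 2
grad_norm = lambda x: np.linalg.norm([x[0], 100 * x[1]])

a, b = (1000.0, 1.0), (495.0, -49.5)
print(phi(a), phi(b))              # 500050.0 245025.0 -> b has the smaller loss
print(grad_norm(a), grad_norm(b))  # ~1005.0  ~4974.7  -> but b has the larger gradient norm
```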
Now, we discuss the relationship between SPL and Algorithm 3 under supervision corruption. Algorithm 3 has the same form as SPL when using the mean squared error for regression tasks, since the loss-layer gradient norm is then a monotone function of the loss itself. In classification, however, Algorithm 3 differs from SPL. To better understand the algorithm, we further analyze the difference between SPL and our algorithm for the cross-entropy loss.
For the cross entropy, denote the output logits by o; then H(yi, fi) = −⟨yi, log(softmax(oi))⟩ = −⟨yi, log(fi)⟩. The gradient of the cross entropy w.r.t. oi is $\frac{\partial H_i}{\partial o_i} = \mathrm{softmax}(o_i) - y_i = f_i - y_i$, so the norm of the loss-layer gradient is the Euclidean distance between fi and yi. Next, we investigate when MSE and cross entropy give a non-monotonic relationship. For the sake of simplicity, we only study a sufficient condition for the non-monotonic relationship, given in Lemma 2.
Lemma 2 Let y ∈ Rq with yk = 1 and yi = 0 for i ≠ k, and let α, β be two q-dimensional vectors in the probability simplex. Without loss of generality, suppose α has the smaller cross-entropy loss, i.e., αk ≥ βk. Then a sufficient condition for ‖α − y‖ ≥ ‖β − y‖ is

$$\mathrm{Var}_{i \neq k}(\{\alpha_i\}) - \mathrm{Var}_{i \neq k}(\{\beta_i\}) \ge \frac{q}{(q-1)^2}\,(\alpha_k - \beta_k)(2 - \alpha_k - \beta_k).$$
Since αk ≥ βk, the right-hand side is non-negative. In conclusion, when MSE gives a different ranking from cross entropy, the variance of the predicted probabilities of the non-true classes is larger for the discarded data point. For example, suppose we have a ground-truth vector y = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0] and two predictions α = [0.08, 0.28, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08] and β = [0.1, 0.3, 0.34, 0.05, 0.05, 0.1, 0.03, 0.03, 0, 0]. The prediction α has the smaller MSE loss while β has the smaller cross-entropy loss. Intuitively, β is more likely to correspond to noisy data, since its prediction has two peaks (0.3 and 0.34); however, since the cross-entropy loss only considers the true-class dimension, it cannot detect such situations. Compared to the cross entropy, the loss-layer gradient (the MSE between prediction and label) considers all dimensions and thus accounts for the overall prediction distribution.
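The numbers in this example are straightforward to check (a small script of our own):

```python
import numpy as np

y = np.zeros(10); y[1] = 1.0
alpha = np.array([0.08, 0.28] + [0.08] * 8)
beta = np.array([0.1, 0.3, 0.34, 0.05, 0.05, 0.1, 0.03, 0.03, 0.0, 0.0])

mse = lambda p: np.sum((p - y) ** 2)
ce = lambda p: -np.log(p[1])   # cross entropy only sees the true-class probability

print(mse(alpha), mse(beta))  # 0.576 0.6324 -> alpha has the smaller MSE
print(ce(alpha), ce(beta))    # ~1.273 ~1.204 -> beta has the smaller cross entropy
```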
5 COMBINING WITH CO-TEACHING STYLE TRAINING
Motivated by co-teaching (Han et al., 2018), one of the current state-of-the-art deep methods for learning under noisy labels, we propose Co-PRL(L), which has the same framework as co-teaching but uses the loss-layer gradient norm to select data. The full algorithm is shown in Algorithm 4 in the appendix; the meaning of all hyper-parameters in Algorithm 4 is the same as in the original Han et al. (2018). Compared with Algorithm 3, besides sampling data according to the loss-layer gradient norm, Co-PRL(L) has two other components: first, we gradually increase the fraction of data to be dropped; second, the two networks exchange their selected data to update each other's parameters.
6 EXPERIMENT
In this section, we perform experiments on benchmark regression and classification datasets. The code is available in the supplementary materials of the submission. We compare PRL(G) (Algo. 2), PRL(L) (Algo. 3), and Co-PRL(L) (Algo. 4) to the following baselines. Standard: standard training without filtering data (MSE for regression, cross entropy for classification); Normclip: standard training with norm clipping; Huber: standard training with the Huber loss (regression only); Decouple: decoupling network, updating two networks using their disagreement (Malach & Shalev-Shwartz, 2017) (classification only); Bootstrap: uses a weighted combination of predicted and original labels as the corrected labels and then performs back-propagation (Reed et al., 2014) (classification only); Min-sgd: choosing the smallest-loss sample in the minibatch to update the model (Shah et al., 2020); SPL: self-paced learning, dropping the data with the largest losses (the same as PRL(L) in the regression setting with the MSE loss); Ignormclip: clipping individual gradients and then averaging them to update the model (regression only); Co-teaching: collaboratively training a pair of SPL models that exchange selected data (Han et al., 2018) (classification only). It is hard to design experiments for agnostic corrupted supervision, and we tried our best to include different types of supervision noise. The corruption settings are as follows: linadv: the corrupted supervision is generated by a random, wrong linear mapping of the features (regression); signflip: the sign of the supervision is flipped (regression); uninoise: corrupted supervision is sampled from a uniform distribution (regression); mixture: a mixture of the above corruption types (regression); pairflip: shuffle the coordinates (e.g., eyes to mouth in CelebA, or cat to dog in CIFAR) (regression and classification); symmetric: randomly assign a wrong class label (classification). For classification, we use classification accuracy as the evaluation metric, and R-squared is used to evaluate the regression experiments. Due to space limits, we only report the average evaluation score on the test data over the last 10 epochs; the full training curves are given in the appendix. All experiments are repeated 5 times for regression and 3 times for classification. The main hyperparameters are shown in the appendix.
6.1 REGRESSION EXPERIMENT
We use the CelebA dataset to perform regression tasks. CelebA has 162,770 training images, 19,867 validation images, and 19,962 test images. The target variable is the ten-dimensional coordinates of the left eye, right eye, nose, left mouth corner, and right mouth corner: given a human face image, the goal is to predict these 10 landmark coordinates. We preprocess CelebA as follows: we train a three-layer CNN on the 162,770 training images to predict the clean coordinates (using the 19,867 validation images for early stopping). Then, we use the trained network to extract 512-dimensional features on the test set. Thus, the final data for our experiments consist of features X ∈ R19962×512 and targets Y ∈ R19962×10. We further split the data into training and test sets, where the training set contains 80% of the data. Then, we manually add linadv, signflip, uninoise, pairflip, and mixture supervision noise to the target variable of the training data. The corruption rate for all types of corruption is varied from 0.1 to 0.4. We use a 3-layer fully connected network in the experiments. The average R-squared over the last 10 epochs is reported in Table 1.
6.2 CLASSIFICATION EXPERIMENT
We perform experiments on CIFAR10 and CIFAR100 to illustrate the effectiveness of our algorithm in the classification setting. We use the 9-layer convolutional neural network from Han et al. (2018). Since most baselines include batch normalization, which makes it difficult to compute individual gradients efficiently, we drop the Ignormclip and PRL(G) baselines. In the appendix, we report results when both co-teaching and Co-PRL(L) drop the batch normalization modules: co-teaching cannot maintain robustness while our method still can, for reasons discussed in the appendix. We consider pairflip and symmetric supervision corruptions in the experiments. To compare with current state-of-the-art methods, for symmetric noise we also use corruption rates beyond 0.5. Although our theoretical analysis assumes the corruption rate is smaller than 0.5, we empirically show that our method can also handle such rates when the noise is not adversarial (i.e., symmetric). Results on CIFAR10 and CIFAR100 are in Table 2. Whether using one network (PRL(L) vs. SPL) or two networks (Co-PRL(L) vs. Co-teaching), our method performs significantly better. Since in real-world problems it is hard to know the ground-truth corruption rate, we also perform a sensitivity analysis on the classification tasks to show the effect of overestimating and underestimating ε. The results are in Table 3; more discussion of the sensitivity analysis can be found in the appendix.
7 CONCLUSION
In this paper, we proposed an efficient algorithm to defend against agnostic supervision corruptions. Both theoretical and empirical analyses showed the effectiveness of our algorithm. Two questions remain for future study. The first is whether we can further improve the O(ε) error bound, or show that O(ε) is tight. The second is to exploit more properties of neural networks, such as gradient sparsity, to see whether better algorithms are possible.
A APPENDIX
A.1 CO-IGFILTER ALGORITHM
See Algorithm 4.
Algorithm 4: Co-PRL(L)
input: initial parameters wf and wg, learning rate η, fixed τ, epochs Tk and Tmax, iterations Nmax; return model parameters wf and wg
for T = 1, 2, ..., Tmax do
  for N = 1, ..., Nmax do
    Randomly sample a minibatch M from Dx, D̃y (noisy dataset)
    Get the predictions Ŷf and Ŷg on M from wf and wg
    Calculate the individual losses lf = L(Y, Ŷf), lg = L(Y, Ŷg)
    Calculate the loss-layer gradient norms scoref = ‖∂lf/∂ŷf‖, scoreg = ‖∂lg/∂ŷg‖
    Sample the R(T)% of instances with the smallest loss-layer gradient norms according to scoref and scoreg to get Nf and Ng
    Update wf = wf − η∇wf L(Nf, wf) and wg = wg − η∇wg L(Ng, wg) (on the selected data)
  end
  Update R(T) = 1 − min{(T/Tk)·τ, τ}
end
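The keep-rate schedule at the end of Algorithm 4 is a simple linear ramp; in code (a one-function sketch):

```python
def keep_rate(T, T_k, tau):
    # R(T) = 1 - min(T / T_k * tau, tau): the drop rate grows linearly
    # until epoch T_k and then stays fixed at tau.
    return 1.0 - min(T / T_k * tau, tau)
```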
A.2 FURTHER ILLUSTRATION OF THE DIFFERENCE BETWEEN SPL AND PRL(G)
In this section, we further illustrate the difference between SPL and PRL(G). To get a more intuitive understanding of our algorithm, consider Figures 1a and 1b. Since we are in the agnostic label-corruption setting, it is difficult to filter out exactly the corrupted data. The figures show two situations: one where loss-based filtering fails and one where gradient-based filtering fails. When loss-based filtering fails, the remaining corrupted data can have a large impact on the overall loss surface; when gradient-based filtering fails, the remaining corrupted data have only a limited impact on the overall loss surface, thus providing robustness.
A.3 NETWORKS AND HYPERPARAMETERS
The hyperparameters are given in Table 4. For classification, we use the same hyperparameters as Han et al. (2018). For CelebA, we use a 3-layer fully connected network with 256 hidden nodes per hidden layer and leaky-ReLU activations. We also attach our code in the supplementary materials.
A.4 REGRESSION R2 ON TESTING DATA CURVE
The R-squared curves for the CelebA data are shown in Figure 2.
A.5 CLASSIFICATION CURVE
The classification curves are shown in Figure 3.
A.6 SENSITIVITY ANALYSIS
Since in real-world problems it is hard to know the ground-truth corruption rate, we perform a sensitivity analysis on the classification tasks to show the effect of ε. The results are in Table 5. As we can see, the performance is stable if we overestimate the corruption rate; this is because only when we overestimate ε can we guarantee that the gradient norm of the remaining set is small. However, when we underestimate the corruption rate, in the worst case there is no guarantee that the gradient norm of the remaining set is small. When using the empirical mean, even one large bad individual gradient can ruin the gradient estimate, and by the convergence analysis of biased gradient descent, the final solution can be very bad in terms of the clean data. This explains why underestimating the corruption rate gives bad results. Also, from Table 5, we see that using the ground-truth corruption rate leads to small variance.
A.7 EMPIRICAL RESULTS ON RUNNING TIME
As claimed in the paper, Algorithm 2 is not efficient. Here we report the execution time of one epoch for three methods: Standard, PRL(G), and PRL(L). For a fair comparison, we replace all batch normalization modules with group normalization, since it is hard to calculate individual gradients when using batch normalization. For PRL(G), we use the Opacus library (https://opacus.ai/) to calculate the individual gradients.
The results are shown in Table 6.
A.8 PROOF OF CONVERGENCE OF BIASED SGD
We give the proof of the theorem showing how a biased gradient affects the final convergence of SGD. We first introduce several assumptions and a definition:
Assumption 2 (L-smoothness) The function φ : Rd → R is differentiable, and there exists a constant L > 0 such that for all θ1, θ2 ∈ Rd, we have $\phi(\theta_2) \le \phi(\theta_1) + \langle\nabla\phi(\theta_1), \theta_2 - \theta_1\rangle + \frac{L}{2}\|\theta_2 - \theta_1\|^2$.
Definition 1 (Biased gradient oracle) A map g : Rd × D → Rd such that g(θ, ξ) = ∇φ(θ) + b(θ) + n(θ, ξ), for a bias b : Rd → Rd and zero-mean noise n : Rd × D → Rd, i.e., Eξ n(θ, ξ) = 0.

Compared to the standard stochastic gradient oracle, the above definition introduces the bias term b. In the noisy-label setting, b is generated by the data with corrupted labels.

Assumption 3 (σ-Bounded noise) There exists a constant σ > 0 such that Eξ‖n(θ, ξ)‖² ≤ σ², ∀θ ∈ Rd.

Assumption 4 (ζ-Bounded bias) There exists a constant ζ > 0 such that ‖b(θ)‖² ≤ ζ², ∀θ ∈ Rd.
For simplicity, assume the learning rate is a constant γ; then in every iteration, biased SGD performs the update θt+1 ← θt − γ g(θt, ξ). The following theorem shows the convergence of the gradient norm for biased SGD.
Theorem 4 (Convergence of Biased SGD (Formal)) Under Assumptions 2, 3, and 4, define F = φ(θ0) − φ∗ and step size $\gamma = \min\left\{\frac{1}{L}, \frac{\sqrt{F}}{\sigma\sqrt{LT}}\right\}$, and denote the desired accuracy by k. Then

$$T = O\left(\frac{1}{k} + \frac{\sigma^2}{k^2}\right)$$

iterations are sufficient to obtain $\min_{t\in[T]} \mathbb{E}\|\nabla\phi(\theta_t)\|^2 = O(k + \zeta^2)$.
Remark 2 Let k = ζ². Then $T = O\left(\frac{1}{\zeta^2} + \frac{\sigma^2}{\zeta^4}\right)$ iterations are sufficient to get $\min_{t\in[T]}\mathbb{E}\|\nabla\phi(\theta_t)\|^2 = O(\zeta^2)$, and performing more iterations does not improve the accuracy in terms of convergence.
Since this is a standard result, with similar statements shown in Bernstein et al. (2018); Devolder et al. (2014); Hu et al. (2020); Ajalloeian & Stich (2020), we provide the proof here for completeness. Proof: by L-smoothness, we have

$$\phi(\theta_2) \le \phi(\theta_1) + \langle\nabla\phi(\theta_1), \theta_2 - \theta_1\rangle + \frac{L}{2}\|\theta_2 - \theta_1\|^2.$$

Using γ ≤ 1/L, we have

$$\begin{aligned}
\mathbb{E}\,\phi(\theta_{t+1}) &\le \phi(\theta_t) - \gamma\,\langle\nabla\phi(\theta_t), \mathbb{E}\,g_t\rangle + \frac{\gamma^2 L}{2}\left(\mathbb{E}\,\|g_t - \mathbb{E}\,g_t\|^2 + \|\mathbb{E}\,g_t\|^2\right) \\
&= \phi(\theta_t) - \gamma\,\langle\nabla\phi(\theta_t), \nabla\phi(\theta_t) + b_t\rangle + \frac{\gamma^2 L}{2}\left(\mathbb{E}\,\|n_t\|^2 + \|\nabla\phi(\theta_t) + b_t\|^2\right) \\
&\le \phi(\theta_t) + \frac{\gamma}{2}\left(-2\,\langle\nabla\phi(\theta_t), \nabla\phi(\theta_t) + b_t\rangle + \|\nabla\phi(\theta_t) + b_t\|^2\right) + \frac{\gamma^2 L}{2}\,\mathbb{E}\,\|n_t\|^2 \\
&= \phi(\theta_t) + \frac{\gamma}{2}\left(-\|\nabla\phi(\theta_t)\|^2 + \|b_t\|^2\right) + \frac{\gamma^2 L}{2}\,\mathbb{E}\,\|n_t\|^2.
\end{aligned}$$

Since ‖bt‖² ≤ ζ² and E‖nt‖² ≤ σ², plugging in the learning-rate constraint yields

$$\mathbb{E}\,\phi(\theta_{t+1}) \le \phi(\theta_t) - \frac{\gamma}{2}\|\nabla\phi(\theta_t)\|^2 + \frac{\gamma}{2}\zeta^2 + \frac{\gamma^2 L}{2}\sigma^2,$$

that is,

$$\mathbb{E}\,\phi(\theta_{t+1}) - \phi(\theta_t) \le -\frac{\gamma}{2}\|\nabla\phi(\theta_t)\|^2 + \frac{\gamma}{2}\zeta^2 + \frac{\gamma^2 L}{2}\sigma^2.$$

Moving the gradient-norm term to the left-hand side and summing over iterations, we get

$$\frac{1}{2T}\sum_{t=0}^{T-1}\mathbb{E}\,\|\nabla\phi(\theta_t)\|^2 \le \frac{F}{T\gamma} + \frac{\zeta^2}{2} + \frac{\gamma L\sigma^2}{2}.$$

Taking the minimum with respect to t and substituting the learning-rate condition directly gives the result.
A.9 PROOF OF THEOREM 2
Denote by G̃ the corrupted minibatch and by G the original clean minibatch, with |G| = |G̃| = m. Let N be the set of remaining data; according to our algorithm, |N| = n = (1 − ε)m. Define A as the set of individual clean gradients that are not discarded by Algorithm 1, and B as the set of individual corrupted gradients that are not discarded; by definition, N = A ∪ B. Let AD be the set of individual clean gradients that are discarded, and AR the set of individual clean gradients that were replaced by corrupted data, so that G = A ∪ AD ∪ AR. BD is the set of individual corrupted gradients discarded by our algorithm. Denote a clean gradient by gi = αiWi and a corrupted gradient by g̃i; by Asm. 1 and the filtering step, every retained gradient satisfies ‖g̃i‖ ≤ L. We now bound the ℓ2 error:
$$\begin{aligned}
\|\mu(G)-\mu(N)\| &= \left\|\frac{1}{m}\sum_{i\in G} g_i - \left(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde g_i\right)\right\| \\
&= \left\|\frac{1}{n}\sum_{i\in G}\frac{n}{m}\, g_i - \left(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde g_i\right)\right\| \\
&= \left\|\frac{1}{n}\sum_{i\in A}\frac{n}{m}\, g_i + \frac{1}{n}\sum_{i\in A_D}\frac{n}{m}\, g_i + \frac{1}{n}\sum_{i\in A_R}\frac{n}{m}\, g_i - \left(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde g_i\right)\right\| \\
&= \left\|\frac{1}{n}\sum_{i\in A}\frac{n-m}{m}\, g_i + \frac{1}{n}\sum_{i\in A_D}\frac{n}{m}\, g_i + \frac{1}{n}\sum_{i\in A_R}\frac{n}{m}\, g_i - \frac{1}{n}\sum_{i\in B}\tilde g_i\right\| \\
&\le \left\|\frac{1}{n}\sum_{i\in A}\frac{n-m}{m}\, g_i + \frac{1}{n}\sum_{i\in A_D}\frac{n}{m}\, g_i + \frac{1}{n}\sum_{i\in A_R}\frac{n}{m}\, g_i\right\| + \left\|\frac{1}{n}\sum_{i\in B}\tilde g_i\right\| \\
&\le \sum_{i\in A}\left\|\frac{m-n}{nm}\, g_i\right\| + \sum_{i\in A_D}\left\|\frac{1}{m}\, g_i\right\| + \sum_{i\in A_R}\left\|\frac{1}{m}\, g_i\right\| + \sum_{i\in B}\frac{1}{n}\,\|\tilde g_i\|.
\end{aligned}$$
By the filtering step, we can guarantee that ‖g̃i‖ ≤ L for every retained corrupted gradient. Let |A| = x; then |B| = n − x = (1 − ε)m − x, |AR| = m − n = εm, and |AD| = m − |A| − |AR| = m − x − εm = (1 − ε)m − x = n − x. Thus, we have:
$$\begin{aligned}
\|\mu(G)-\mu(N)\| &\le x\,\frac{m-n}{nm}\,L + (n-x)\frac{1}{m}L + (m-n)\frac{1}{m}L + (n-x)\frac{1}{n}L \\
&= x\left(\frac{m-n}{nm}-\frac{1}{m}\right)L + \frac{n}{m}L + \frac{m-n}{m}L + (n-x)\frac{1}{n}L \\
&= \frac{1}{m}\,\frac{2\epsilon-1}{1-\epsilon}\,xL + L + L - \frac{1}{n}\,xL \\
&= xL\,\frac{2\epsilon-2}{n} + 2L.
\end{aligned}$$

Since (2ε − 2)/n < 0, the upper bound is decreasing in x, so the worst case is attained when x is as small as possible. According to our problem setting, x ≥ n − εm = (1 − 2ε)m. Substituting x = (1 − 2ε)m, we obtain

$$\|\mu(G)-\mu(N)\| \le (1-2\epsilon)m\,\frac{2\epsilon-2}{n}\,L + 2L = -2(1-2\epsilon)L + 2L = 4\epsilon L.$$

Since ε < 0.5, we conclude that ‖µ(G) − µ(N)‖ = O(εL).
Note that if the Lipschitz continuity assumption does not hold, then L should be dimension-dependent.
A.10 PROOF OF RANDOMIZED FILTERING ALGORITHM
Lemma 3 (Gradient Estimation Error for Randomized Filtering) Let G̃ ∈ Rm×d be a corrupted matrix generated as in Problem 2, and let G ∈ Rm×d be the original clean gradient matrix. Suppose we arbitrarily select n = (1 − ε)m rows from G̃ to obtain the remaining set N ∈ Rn×d. Let µ be the empirical mean function. Assume the clean gradients before the loss layer have bounded operator norm, ‖Wi‖op ≤ C; let maxi ‖αi‖ = k and maxi ‖δi‖ = v, and assume ε < 0.5. Then we have:

$$\|\mu(G)-\mu(N)\| \le Ck\,\frac{3\epsilon-4\epsilon^2}{1-\epsilon} + Cv\,\frac{\epsilon}{1-\epsilon}.$$
A.10.1 PROOF OF LEMMA 3
Denote by G̃ the corrupted minibatch and by G the original clean minibatch, with |G| = |G̃| = m. Let N be the set of remaining data; according to our algorithm, |N| = n = (1 − ε)m. Define A as the set of individual clean gradients that are not discarded by Algorithm 3, and B as the set of individual corrupted gradients that are not discarded; by definition, N = A ∪ B. Let AD be the set of individual clean gradients that are discarded, and AR the set of individual clean gradients that were replaced by corrupted data, so that G = A ∪ AD ∪ AR. BD is the set of individual corrupted gradients discarded by our algorithm. Denote a clean gradient by gi = αiWi and a corrupted gradient by g̃i = δiWi; by our assumption, ‖Wi‖op ≤ C. We now bound the ℓ2 error:

$$\begin{aligned}
\|\mu(G)-\mu(N)\| &= \left\|\frac{1}{m}\sum_{i\in G} g_i - \left(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde g_i\right)\right\| \\
&= \left\|\frac{1}{n}\sum_{i\in G}\frac{n}{m}\, g_i - \left(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde g_i\right)\right\| \\
&= \left\|\frac{1}{n}\sum_{i\in A}\frac{n}{m}\, g_i + \frac{1}{n}\sum_{i\in A_D}\frac{n}{m}\, g_i + \frac{1}{n}\sum_{i\in A_R}\frac{n}{m}\, g_i - \left(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde g_i\right)\right\| \\
&= \left\|\frac{1}{n}\sum_{i\in A}\frac{n-m}{m}\, g_i + \frac{1}{n}\sum_{i\in A_D}\frac{n}{m}\, g_i + \frac{1}{n}\sum_{i\in A_R}\frac{n}{m}\, g_i - \frac{1}{n}\sum_{i\in B}\tilde g_i\right\| \\
&\le \left\|\frac{1}{n}\sum_{i\in A}\frac{n-m}{m}\, g_i + \frac{1}{n}\sum_{i\in A_D}\frac{n}{m}\, g_i + \frac{1}{n}\sum_{i\in A_R}\frac{n}{m}\, g_i\right\| + \left\|\frac{1}{n}\sum_{i\in B}\tilde g_i\right\|. \qquad (1)
\end{aligned}$$
Let |A| = x; then |B| = n − x = (1 − ε)m − x, |AR| = m − n = εm, and |AD| = m − |A| − |AR| = m − x − (m − n) = n − x = (1 − ε)m − x. Thus, we have:
$$\|\mu(G)-\mu(N)\| \le \sum_{i\in A}\left\|\frac{m-n}{nm}\, g_i\right\| + \sum_{i\in A_D}\left\|\frac{1}{m}\, g_i\right\| + \sum_{i\in A_R}\left\|\frac{1}{m}\, g_i\right\| + \sum_{i\in B}\frac{1}{n}\,\|\tilde g_i\|.$$

For an individual gradient, by the label-corruption gradient structure in Problem 2 and the assumption ‖Wi‖op ≤ C, we have ‖gi‖ ≤ ‖αi‖‖Wi‖op ≤ C‖αi‖. With maxi ‖αi‖ = k and maxi ‖δi‖ = v, this gives ‖gi‖ ≤ Ck and ‖g̃i‖ ≤ Cv. Therefore,

$$\|\mu(G)-\mu(N)\| \le Cx\,\frac{m-n}{nm}\,k + C(n-x)\frac{1}{m}k + C(m-n)\frac{1}{m}k + C(n-x)\frac{1}{n}v.$$

Note that the above upper bound holds for any x, so it suffices to bound its worst case over feasible x. Rearranging the terms, we have

$$\begin{aligned}
\|\mu(G)-\mu(N)\| &\le Cx\left(\frac{m-n}{nm}-\frac{1}{m}\right)k + Cn\frac{1}{m}k + C(m-n)\frac{1}{m}k + C(n-x)\frac{1}{n}v \\
&= C\,\frac{1}{m}\,\frac{2\epsilon-1}{1-\epsilon}\,xk + Ck + Cv - \frac{1}{n}\,Cxv \\
&= Cx\left(\frac{k(2\epsilon-1)}{m(1-\epsilon)} - \frac{v}{n}\right) + Ck + Cv \\
&= Cx\,\frac{k(2\epsilon-1) - v}{m(1-\epsilon)} + Ck + Cv.
\end{aligned}$$

Since ε < 0.5, we have k(2ε − 1) − v < 0, so the bound is decreasing in x and is largest when x is as small as possible. According to our algorithm, n − εm = (1 − 2ε)m ≤ x ≤ n = (1 − ε)m. Substituting x = (1 − 2ε)m, we have

$$\begin{aligned}
\|\mu(G)-\mu(N)\| &\le Ck(1-2\epsilon)\,\frac{2\epsilon-1}{1-\epsilon} + Ck + Cv - Cv\,\frac{1-2\epsilon}{1-\epsilon} \\
&= Ck\,\frac{3\epsilon-4\epsilon^2}{1-\epsilon} + Cv\,\frac{\epsilon}{1-\epsilon}.
\end{aligned}$$
A.11 PROOF OF THEOREM 3
According to Algorithm 3, we can guarantee that v ≤ k. Then we have:

$$\begin{aligned}
\|\mu(G)-\mu(N)\| &\le Ck\,\frac{3\epsilon-4\epsilon^2}{1-\epsilon} + Cv\,\frac{\epsilon}{1-\epsilon} \\
&\le Ck\,\frac{4\epsilon-4\epsilon^2}{1-\epsilon} = 4\epsilon Ck = O(\epsilon\sqrt{q}),
\end{aligned}$$

since C is a constant and k is the norm of a q-dimensional vector.
A.12 COMPARISON BETWEEN SORTING BY LOSS-LAYER GRADIENT NORM AND SORTING BY LOSS VALUE

Assume we have a d-class label y ∈ Rd, where yk = 1 and yi = 0 for i ≠ k. With a slight abuse of notation, suppose we have two predictions p ∈ Rd and q ∈ Rd. Without loss of generality, assume p has the smaller cross-entropy loss, which implies pk ≥ qk. For MSE, assume the opposite ordering:
$$\|p-y\|^2 \ge \|q-y\|^2 \;\Rightarrow\; \sum_{i\neq k} p_i^2 + (1-p_k)^2 \ge \sum_{i\neq k} q_i^2 + (1-q_k)^2 \qquad (2)$$
For the coordinates pi with i ≠ k, we have

$$\mathrm{Var}_{i\neq k}(p_i) = \mathbb{E}(p_i^2) - \left(\mathbb{E}(p_i)\right)^2 = \frac{1}{d-1}\sum_{i\neq k} p_i^2 - \frac{1}{(d-1)^2}(1-p_k)^2 \qquad (3)$$
Then

$$\begin{aligned}
&\sum_{i\neq k} p_i^2 + (1-p_k)^2 \ge \sum_{i\neq k} q_i^2 + (1-q_k)^2 \\
\Rightarrow\;& \mathrm{Var}_{i\neq k}(p_i) + \frac{d}{(d-1)^2}(1-p_k)^2 \ge \mathrm{Var}_{i\neq k}(q_i) + \frac{d}{(d-1)^2}(1-q_k)^2 \\
\Rightarrow\;& \mathrm{Var}_{i\neq k}(p_i) - \mathrm{Var}_{i\neq k}(q_i) \ge \frac{d}{(d-1)^2}\left((1-q_k)^2 - (1-p_k)^2\right) \\
\Rightarrow\;& \mathrm{Var}_{i\neq k}(p_i) - \mathrm{Var}_{i\neq k}(q_i) \ge \frac{d}{(d-1)^2}\,(p_k - q_k)(2 - p_k - q_k) \qquad (4)
\end{aligned}$$ | 1. What are the strengths and weaknesses of the paper regarding its contributions to training neural networks under data poisoning?
2. What are the concerns regarding the error bounds provided by the authors?
3. Are there any questions about the absence or lack of formal proofs for certain theorems and lemmas?
4. Do you have any concerns about the experimental evaluation presented in the paper?
5. How does the reviewer assess the overall quality and completeness of the paper's content? | Review | Review
In this paper, the authors studied the problem of training neural networks under data poisoning, i.e., when a small fraction of the training data is corrupted by the adversary. They considered two data corruption settings: one that allows both the data x and the supervision y to be corrupted, which is called general corruption, and one with only the supervision y corrupted. Their first algorithm, which removes the data points whose gradient norm is large when computing the average gradient, applies to the general corruption setting. They showed their algorithm has eps*sqrt(d) error or eps*L error, which can be quite large for high-dimensional and deep neural net learning settings. Their second algorithm applies to the setting where only the supervision y is corrupted, and works by removing the data points whose output-layer gradient is large. Assuming the clean data has bounded gradients, and the dimension of y is p, their algorithm achieves error eps*sqrt(p).
Weakness: 1. The authors claimed that, compared to Diakonikolas 19, they improved the error from sqrt(eps) to eps. However, the eps result relies on the fact that the gradient of good data has bounded norm, and I believe in that setting Diakonikolas 19 also achieves eps error. 2. In paragraphs close to Lemma 1 and Lemma 3, the authors mention a randomized filtering algorithm, and prove Lemma 1 and Lemma 3 for that algorithm. However, I can't find the mentioned randomized filtering algorithm in the paper. 3. Theorem 1 and Theorem 4 have no formal proof. 4. Theorem 2 has no proof. 5. In the experiment section, there is no comparison to other state-of-the-art algorithms, for example Diakonikolas 19.
Overall, I think the theoretical result in the paper is incomplete, and the experimental evaluation is insufficient. |
ICLR | Title
Provable Robust Learning for Deep Neural Networks under Agnostic Corrupted Supervision
Abstract
Training deep neural models in the presence of corrupted supervisions is challenging as the corrupted data points may significantly impact the generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption, and provides a unified framework for both classification and regression problems. Different from many existing approaches that quantify the quality of individual data points (e.g., loss values) and filter out data points accordingly, the proposed algorithm focuses on controlling the collective impact of data points on the averaged gradient. Even when a corrupted data point failed to be excluded by the proposed algorithm, the data point will have very limited impact on the overall loss, as compared with state-of-the-art filtering data points based on loss values. Extensive empirical results on multiple benchmark datasets have demonstrated the robustness of the proposed method under different types of corruptions.
1 INTRODUCTION
Corrupted supervision is a common issue in real-world learning tasks, where the learning targets are not accurate due to various factors in the data collection process. In deep learning models, such corruptions are especially severe, whose degree-of-freedom makes them easily memorize corrected examples and susceptible to overfitting (Zhang et al., 2016).
There are extensive efforts to achieve robustness against corrupted supervisions. A natural approach to deal with corrupted supervision in deep neural networks (DNNs) is to reduce the model exposure to corrupted data points during training. By detecting and filtering (or re-weighting) the possible corrupted samples, the learning is expected to deliver a model that is similar to the one trained on clean data (without corruption) (Kumar et al., 2010; Han et al., 2018; Zheng et al., 2020). There are different criteria designed to identify the corrupted data points in training. For example, Kumar et al. (2010); Han et al. (2018); Jiang et al. (2018) leveraged the loss function values of data points; Zheng et al. (2020) tapped prediction uncertainty for filtering data; Malach & Shalev-Shwartz (2017) used the disagreement between two deep networks; Reed et al. (2014) utilized the prediction consistency of neighboring iterations. The success of these methods highly depends on the effectiveness of the detection criteria in correctly identifying the corrupted data points. Since the corrupted labels remain unknown throughout the learning, such “unsupervised” detection approaches may not be effective, either lack theoretical guarantees of robustness (Han et al., 2018; Reed et al., 2014; Malach & Shalev-Shwartz, 2017; Li et al., 2017) or provide guarantees under assumptions of the availability of prior knowledge about the type of corruption (Zheng et al., 2020; Shah et al., 2020; Patrini et al., 2017; Yi & Wu, 2019). Besides, another limitation of many existing approaches is that, they are exclusively designed for classification problems (e.g., Malach & Shalev-Shwartz (2017); Reed et al. (2014); Menon et al. (2019); Zheng et al. (2020)) and are not straightforward to extend to solve regression problems.
To tackle these challenges, this paper presents a unified optimization framework with robustness guarantees without any assumptions on how supervisions are corrupted, and is applicable to both classification and regression problems. Instead of developing an accurate criterion for detection corrupted samples, we adopt a novel perspective and focus on limiting the collective impact of corrupted samples during the learning process through robust mean estimation of gradients. Specifically, if our estimated average gradient is close to the gradient from the clean data during the learning iterations,
then the final model will be close to the model trained on clean data. As such, a corrupted data point can still be used during the training when it does not considerably alter the averaged gradient. This observation has remarkably impact on our algorithm design: instead of explicitly quantifying (and identifying) individual corrupted data points, which is a hard problem in itself, we are now dealing with an easier task, i.e., eliminating training data points that significantly distort the mean gradient estimation. One immediate consequence of this design is that, even when a corrupted data point failed to be excluded by the proposed algorithm, the data point is likely to have very limited impact on the overall loss, as compared with state-of-the-art filtering data points based on loss values. We perform experiments on both regression and classification with corrupted supervision on multiple benchmark datasets. The results show that the proposed method outperforms state-of-the-art.
2 BACKGROUND
Learning from corrupted data (Huber, 1992) has attracted considerable attention in the machine learning community (Natarajan et al., 2013). Many recent studies have investigated robustness of classification tasks with noisy labels. For example, Kumar et al. (2010) proposed a self-paced learning (SPL) approach, which assigns higher weights to examples with smaller loss. A similar idea was used in curriculum learning (Bengio et al., 2009), in which the model learns easy samples first before learning harder ones. Alternative methods inspired by SPL include learning the data weights (Jiang et al., 2018) and collaborative learning (Han et al., 2018; Yu et al., 2019). Label correction (Patrini et al., 2017; Li et al., 2017; Yi & Wu, 2019) is another approach, which revises original labels in data with a goal to recover clean labels from corrupt ones. However, since we do not have access to which data points are corrupted, it is hard to get provable guarantees for label correction without strong assumptions regarding the corruption type.
Accurate estimation of gradients is a key step for successful optimization. The relationship between gradient estimation and final convergence has been widely studied in the optimization community. Since computing an approximate (and potentially biased) gradient is often more efficient than computing the exact gradient, many studies use approximate gradients to optimize their models, and it has been shown that, without assumptions on the gradient estimation error, such methods suffer from the bias in the estimation (d'Aspremont, 2008; Schmidt et al., 2011; Bernstein et al., 2018; Hu et al., 2020; Ajalloeian & Stich, 2020).
A closely related topic is robust mean estimation. Given corrupted data, robust mean estimation aims at producing an estimate µ̂ such that the difference ‖µ̂ − µ‖₂ between the estimate computed on corrupted data and the mean of the clean data is minimized. It has been shown that the median or trimmed mean are optimal statistics for mean estimation in one-dimensional data (Huber, 1992). However, robustness in high dimensions is quite challenging, since applying the coordinate-wise optimal robust estimator leads to an error factor O(√d) that scales with the data dimension. Although some classical estimators, such as the Tukey median (Tukey, 1975), get rid of the O(√d) error, the corresponding algorithms do not run in polynomial time. More recently, Diakonikolas et al. (2016); Lai et al. (2016) designed polynomial-time algorithms with dimension-free error bounds. These results have been widely applied to improve algorithmic efficiency in various scenarios (Dong et al., 2019; Cheng et al., 2020).
Robust optimization aims to optimize a model given corrupted data. Many previous studies improve the robustness of optimization in different problem settings. However, most of them either study linear regression and its variants (Bhatia et al., 2015; 2017; Shen & Sanghavi, 2019) or study convex optimization (Prasad et al., 2018), so their results cannot be directly generalized to deep neural networks. Diakonikolas et al. (2019) proposed a very general non-convex optimization method with an agnostic-corruption guarantee. However, the space complexity of the algorithm is high, so it cannot be applied to deep neural networks given current hardware limitations.
3 METHODOLOGY
Before introducing our algorithm, we first discuss the corrupted supervision setting. To characterize agnostic corruptions, we consider an adversary that tries to corrupt the supervision of a clean dataset. There is no limitation on how the adversary corrupts the supervision: it can either randomly permute the targets or corrupt them in a way that maximizes the negative impact (i.e., lowers performance).
First, the adversary can choose up to an ε-fraction of the clean targets Dy ∈ R^{n×q} and change the selected rows of Dy to arbitrary valid values, generating D̃y ∈ R^{n×q}. The adversary then returns the corrupted dataset (Dx, D̃y) to our learning algorithm A. In this process, the only constraint on the adversary is the fraction ε; the adversary has full knowledge of the data, and even of the learning algorithm A. A natural question to ask is: given a dataset with ε-fraction corrupted supervision, Dx ∈ R^{n×p}, D̃y, and a learning objective φ : R^p × R^q × R^d → R parameterized by θ, can we output parameters θ ∈ R^d such that ‖∇θφ(θ; Dx, Dy)‖ is minimized? When ε = 0, we have D̃y = Dy and learning is done on the clean data; stochastic gradient descent can then converge to a stationary point, where ‖∇θφ(θ; Dx, Dy)‖ = 0. However, when the supervision is corrupted as above, this is no longer the case, due to the error in θ induced by the corrupted data. We thus want an efficient algorithm that finds a model θ minimizing ‖∇θφ(θ; Dx, Dy)‖. A robust model θ should have a small value of ‖∇θφ(θ; Dx, Dy)‖, and we hypothesize that a smaller ‖∇θφ(θ; Dx, Dy)‖ yields better generalization.
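To make the threat model concrete, the snippet below sketches the ε-fraction corruption interface in Python; the function name `eps_corrupt` and the `attack` callback are our illustrations, not part of any library.

```python
import numpy as np

def eps_corrupt(y_clean: np.ndarray, eps: float, attack, seed: int = 0):
    """Apply the epsilon-fraction adversary: `attack` may inspect everything
    (the data, even the learner) and returns arbitrary valid targets for the
    rows it is allowed to touch; eps is its only constraint."""
    rng = np.random.default_rng(seed)
    y = y_clean.copy()
    rows = rng.choice(len(y), size=int(eps * len(y)), replace=False)
    y[rows] = attack(y_clean, rows)  # adversary rewrites the chosen rows
    return y
```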
3.1 STOCHASTIC GRADIENT DESCENT WITH BIASED GRADIENT
A direct consequence of corrupted supervision is biased gradient estimation. In this section, we first analyze how such biased gradient estimation affects the robustness of learning. The classical analysis of stochastic gradient descent (SGD) requires access to a stochastic gradient oracle that provides an unbiased estimate of the true gradient. However, corrupted supervision leads to corrupted gradients, and it is difficult to obtain unbiased gradient estimates without assumptions on how the gradients are corrupted. We start the analysis with the following informal theorem (with detailed assumptions deferred) on how a biased gradient affects the final convergence of SGD. Its formal version is provided in Theorem 4 in the Appendix.
Theorem 1 (Convergence of Biased SGD (Informal)) Under mild assumptions, let ζ be the maximum ℓ2 norm of the difference between the clean minibatch gradient and the corrupted minibatch gradient, ‖g − g̃‖ ≤ ζ. Then, using biased gradient estimates, SGD converges to a ζ-approximate stationary point: E‖∇φ(θ_t)‖² = O(ζ²).

Remark 1 In the corrupted supervision setting, let g̃ be the gradient estimated from the corrupted data D̃ and g the gradient estimated from the clean data D. If ‖g̃ − g‖ ≤ ζ, it follows that running SGD on the corrupted dataset converges to a ζ-approximate stationary point of the objective defined by the clean data. Note that the difference between the above theorem and a typical convergence theorem is that we are using a biased gradient estimate.
According to Theorem 1 and the remark, a robust estimate of the gradient g is the key to ensuring a robust model (i.e., one that converges to the clean solution). We also assume the loss function has the form L(y, ŷ); many commonly used loss functions fall in this category.
3.2 ROBUST GRADIENT ESTIMATION FOR GENERAL DATA CORRUPTION
We first introduce Algo. 2 for general corruption (i.e., corruption of features and/or supervisions). The algorithm excludes the data points with large gradient norms and uses the empirical mean of the remaining points to update the model. Thm. 2 gives its robustness property.
Algorithm 1: Robust Mean Estimation for Corrupted Gradients
input: gradient matrix G ∈ R^{m×d}, corruption rate ε; return: estimated mean µ̂ ∈ R^d
1. For each row z_i of G, calculate the ℓ2 norm ‖z_i‖.
2. Choose the ε-fraction of rows with the largest ‖z_i‖.
3. Remove the selected rows, and return the empirical mean of the remaining rows as µ̂.
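As a concrete reference, here is a minimal PyTorch sketch of Algorithm 1; the function name `robust_gradient_mean` is ours, not from any library.

```python
import torch

def robust_gradient_mean(grads: torch.Tensor, eps: float) -> torch.Tensor:
    """Algorithm 1: drop the eps-fraction of gradient rows with the largest
    L2 norm and return the empirical mean of the remaining rows.
    grads has shape (m, d): one flattened gradient per data point."""
    m = grads.shape[0]
    n_drop = int(eps * m)
    if n_drop == 0:
        return grads.mean(dim=0)
    norms = grads.norm(dim=1)              # per-row L2 norms ||z_i||
    keep = norms.argsort()[: m - n_drop]   # indices of the smallest norms
    return grads[keep].mean(dim=0)
```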
Assumption 1 (Individual L-smooth loss) For every individual loss function φ_i, there exists a constant L > 0 such that for a clean sample i we have |φ_i(x) − φ_i(y)| ≤ L|x − y| for any x, y.
Theorem 2 (Robust Gradient Estimation for Data Corruption) Let G̃ ∈ R^{m×d} be an ε-corrupted gradient matrix and G ∈ R^{m×d} the clean gradient matrix, and let µ be the empirical mean function. Then the output µ̂ of Algo. 1 applied to G̃ satisfies ‖µ(G) − µ̂‖ = O(ε√d). Moreover, if Asm. 1 holds, we further have ‖µ(G) − µ̂‖ = O(εL).

Algorithm 2: (PRL(G)) Provable Robust Learning for General Corrupted Data
input: corrupted dataset Dx, D̃y; learning rate γ_t; return: model parameter θ
for t = 1 to maxiter do
  Randomly sample a minibatch M from Dx, D̃y
  Calculate the individual gradients G̃ on M
  Apply Algorithm 1 to G̃ to get the robust gradient estimate µ̂
  Update the model: θ_{t+1} = θ_t − γ_t µ̂
end
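To illustrate how Algorithm 2 combines per-sample gradients with the filtering step, here is a hedged sketch of one PRL(G) update; the explicit per-example loop is for clarity only (a practical implementation would use vectorized per-sample gradients), and `robust_gradient_mean` is the helper sketched above.

```python
import torch
import torch.nn as nn

def prl_g_step(model: nn.Module, loss_fn, x, y, eps: float, lr: float):
    """One PRL(G) update (Algorithm 2): per-sample gradients, Algorithm 1
    filtering, then a plain SGD step on the robust mean."""
    params = [p for p in model.parameters() if p.requires_grad]
    per_sample = []
    for xi, yi in zip(x, y):                     # one graph per example
        loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        per_sample.append(torch.cat([g.reshape(-1) for g in grads]))
    G = torch.stack(per_sample)                  # (m, d) gradient matrix
    mu_hat = robust_gradient_mean(G, eps)        # Algorithm 1
    with torch.no_grad():                        # theta <- theta - lr * mu_hat
        offset = 0
        for p in params:
            n = p.numel()
            p -= lr * mu_hat[offset:offset + n].view_as(p)
            offset += n
```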
Combining with the aforementioned convergence analysis of biased SGD, we get the following:
Corollary 1 (Robust Optimization for Corrupted Data) Given the assumptions used in Thm. 1 and Asm. 1, applying Algo. 1 to any ε-fraction corrupted data yields min_{t∈[T]} E‖∇φ(θ_t)‖ = O(εL) for large enough T. If Asm. 1 does not hold, then min_{t∈[T]} E‖∇φ(θ_t)‖ = O(ε√d) for large enough T.
The robustness guarantee states that even when training on generally corrupted data (corrupted supervision is a special case), Algo. 2 guarantees that the gradient norm on the remaining data cannot be too large. Since Thm. 2 gives a dimension-free error bound when Asm. 1 holds, Corollary 1 also gives a dimension-free robustness guarantee under Asm. 1. We defer the detailed discussion of O(εL) to later sections. Although the error bound O(εL) sounds appealing, it still has several drawbacks. A dimension-free error bound, i.e., one that does not grow with increasing dimension, is critical when working with neural networks, due to the extremely large gradient dimension (i.e., the number of parameters of the network); however, Thm. 2 gives the dimension-free bound only when Asm. 1 holds, which is quite strong. In addition, even when Asm. 1 holds, L can be large, leading to a large gradient estimation error. Existing work (Diakonikolas et al., 2019) already achieves a dimension-free O(√ε) guarantee under general corruptions, which is a much better theoretical result than the above theorem. However, in practice, we found that the gradient norms of deep neural networks for individual data points are usually not very large, even at the beginning of training; this can be partially attributed to the network structure. Further discussion of this issue is beyond the scope of this paper, but the theoretical bound above states that, for general models, the robustness should depend on the number of parameters.
Another concern with Alg. 2 is its efficiency: it requires computing individual gradients. Although there are advanced approaches for obtaining individual gradients, e.g., (Goodfellow, 2015), they are still relatively slow compared to standard back-propagation. Moreover, these methods are usually incompatible with popular components such as batch normalization (BN), since individual gradients are not independent inside BN, and using them loses the benefits of parallelization.
3.3 ROBUST GRADIENT ESTIMATION FOR ONE DIMENSIONAL CORRUPTED SUPERVISION
In this section, we show that the above robustness bound can be improved if we assume the corruption only comes from the supervision. In addition, by fully exploiting the gradient structure under corrupted supervision, our algorithm becomes much more efficient and compatible with batch normalization. We use the one-dimensional supervision setting (binary classification or single-target regression) to illustrate the intuition, and extend it to more general settings in the next section. Consider a high-dimensional supervised learning problem with X ∈ R^{n×p} and y ∈ R^n. The goal is to learn a function f, parameterized by θ ∈ R^d, minimizing the loss min_θ Σ_{i=1}^n φ_i = min_θ Σ_{i=1}^n L(y_i, f(x_i, θ)). The gradient for a data point i is ∇_θφ_i = (∂l_i/∂f_i)(∂f_i/∂θ) = α_i g_i.
One key observation is that when only the supervision is corrupted, the corruption contributes only to the term α_i = ∂l_i/∂f_i, which is a scalar in the one-dimensional setting. In other words, given the clean gradient of the i-th point, g_i ∈ R^d, corrupted supervision can only perturb the length of the gradient vector, changing the gradient from α_i g_i to δ_i g_i, where δ_i = ∂l̃_i/∂f_i. If α_i and δ_i were known, we could easily eliminate the impact of corrupted supervision; but this is not the case, since we only have the possibly corrupted target ỹ_i as opposed to the ground truth y_i.
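As a worked instance of this scaling structure (our illustration, using the squared loss), consider the one-dimensional MSE case:

```latex
% One-dimensional squared loss: only the scalar factor changes under
% label corruption; the direction g_i does not.
l_i = \tfrac{1}{2}\bigl(f(x_i,\theta) - y_i\bigr)^2,
\qquad
\nabla_\theta \phi_i
  = \underbrace{\bigl(f(x_i,\theta) - y_i\bigr)}_{\alpha_i}\,
    \underbrace{\frac{\partial f(x_i,\theta)}{\partial \theta}}_{g_i}.
% Replacing y_i by a corrupted target \tilde{y}_i rescales the same
% direction:
\nabla_\theta \tilde{\phi}_i
  = \underbrace{\bigl(f(x_i,\theta) - \tilde{y}_i\bigr)}_{\delta_i}\, g_i .
```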
On the other hand, the fact that corrupted supervision merely rescales the clean gradient can be used to reshape the robust optimization problem. Recall that in every iteration we update the model by θ⁺ = θ − γµ(G), where µ denotes the empirical mean function and G = [∇_θφ_1, …, ∇_θφ_m]ᵀ ∈ R^{m×d} is the gradient matrix for a minibatch of size m. We then have the following:
Problem 1 (Robust Gradient Estimation for Corrupted Supervision - One-Dimensional Case) Given a clean gradient matrix G ∈ R^{m×d} and an ε-corrupted matrix G̃ in which at most an ε-fraction of rows are corrupted from α_i g_i to δ_i g_i, design an algorithm A : R^{m×d} → R^d that minimizes ‖µ(G) − A(G̃)‖.
Note that when ‖δ_i‖ is large, the corrupted gradient has a large effect on the empirical mean, and vice versa. This motivates an algorithm that filters out data points by the loss-layer gradient norm ‖∂l_i/∂f_i‖: if the loss-layer gradient norm of a data point is large (in the one-dimensional case this gradient reduces to a scalar and the norm becomes its absolute value), we exclude the data point when computing the empirical mean of gradients for this iteration. This algorithm is applicable to both regression and classification problems. In particular, when using the mean squared error (MSE) loss for regression, the loss-layer gradient norm is a monotone function of the loss itself, and the algorithm reduces to self-paced learning (Kumar et al., 2010). We summarize the procedure in Alg. 3 and extend it to the more general multi-dimensional case in the next section.
Algorithm 3: (PRL(L)) Efficient Provable Robust Learning for Corrupted Supervision
input: dataset Dx, D̃y with corrupted supervision, learning rate γ_t, estimated corruption rate τ; return: model parameter θ
for t = 1 to maxiter do
  Randomly sample a minibatch M from Dx, D̃y
  Compute the predicted labels Ŷ on M
  Calculate the loss-layer gradient norm for each data point in M (i.e., ‖ŷ − y‖ for mean squared error or cross entropy)
  Remove the top τ-fraction of data from M according to ‖ŷ − y‖
  Take the empirical mean of the remaining gradients as the robust mean estimate µ̂
  Update the model: θ_{t+1} = θ_t − γ_t µ̂
end
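The following is a minimal PyTorch sketch of one PRL(L) step for classification (our illustration; for regression with MSE, the softmax/one-hot scoring would be replaced by ‖ŷ − y‖ on raw outputs):

```python
import torch
import torch.nn.functional as F

def prl_l_step(model, optimizer, x, y, tau: float):
    """One PRL(L) update (Algorithm 3): rank points by the loss-layer
    gradient norm ||softmax(o) - y|| and drop the top tau-fraction before
    the usual backward pass."""
    logits = model(x)
    onehot = F.one_hot(y, num_classes=logits.shape[1]).float()
    scores = (torch.softmax(logits, dim=1) - onehot).norm(dim=1).detach()
    m = x.shape[0]
    keep = scores.argsort()[: m - int(tau * m)]   # smallest-score points
    loss = F.cross_entropy(logits[keep], y[keep]) # mean over kept points
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```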
3.4 EXTENSION TO MULTI-DIMENSIONAL CORRUPTED SUPERVISION
To extend our algorithm and analysis to the multi-dimensional case, let q be the supervision dimension. The gradient for each data point is ∇_θφ_i = (∂l_i/∂f_i)(∂f_i/∂θ), where ∂l_i/∂f_i ∈ R^q is the gradient of the loss with respect to the model outputs and ∂f_i/∂θ ∈ R^{q×d} is the gradient of the model outputs with respect to the model parameters. Similarly, when the supervision is corrupted, the corruption enters through the term ∂l_i/∂f_i, which is now a vector. Let δ_i = ∂l̃_i/∂f_i ∈ R^q, α_i = ∂l_i/∂f_i ∈ R^q, W_i = ∂f_i/∂θ ∈ R^{q×d}, and let m be the minibatch size. Denote the clean gradient matrix by G ∈ R^{m×d}, whose i-th row is g_i = α_i W_i. The multi-dimensional robust gradient estimation problem is then defined by:
Problem 2 (Robust Gradient Estimation for Corrupted Supervision - Multi-Dimensional Case) Given a clean gradient matrix G and an ε-corrupted matrix G̃ in which at most an ε-fraction of rows are corrupted from α_i W_i to δ_i W_i, design an algorithm A : R^{m×d} → R^d that minimizes ‖µ(G) − A(G̃)‖.
We start our analysis by investigating the effect of a filtering-based algorithm, i.e., using the empirical mean of a (1 − ε)-fraction subset to estimate the empirical mean of the clean gradient matrix. We have the following result for a randomized filtering-based algorithm (proof in the Appendix):
Lemma 1 (Gradient Estimation Error for Randomly Dropping an ε-fraction of Data) Let G̃ ∈ R^{m×d} be a corrupted matrix generated as in Problem 2, and let G ∈ R^{m×d} be the original clean gradient matrix. Suppose an arbitrary (1 − ε)-fraction of rows is selected from G̃ to form the matrix N ∈ R^{n×d}. Let µ be the empirical mean function. Assume the clean gradient before the loss layer has bounded operator norm, i.e., ‖W‖_op ≤ C, the maximum clean loss-layer gradient is max_{i∈G} ‖α_i‖ = k, and the maximum corrupted loss-layer gradient is max_{i∈N} ‖δ_i‖ = v. Then we have:

‖µ(G) − µ(N)‖ ≤ Ck(3ε − 4ε²)/(1 − ε) + Cεv/(1 − ε).
We see that v is the only term related to the corrupted supervision. If v is large, the bound is not safe, since the right-hand side can be arbitrarily large (i.e., an adversary can change the labels in a way that makes v extremely large). Controlling the magnitude of v therefore provides a way to effectively control the bound: for example, if we manage to ensure v ≤ k, the bound is safe. This can be achieved by sorting the gradient norms at the loss layer and discarding the largest ε-fraction of data points. We thus have the following result.
Theorem 3 (Robust Gradient Estimation for Supervision Corruption) Let G̃ be a corrupted matrix generated as in Problem 2, let q be the label dimension, and let µ be the empirical mean of the clean matrix G. Assume the clean gradient before the loss layer has bounded operator norm, ‖W‖_op ≤ C. Then the gradient estimate µ̂ in Algo. 3 satisfies ‖µ − µ̂‖ = O(ε√q) ≈ O(ε).
Comparing Thm. 2 and Thm. 3, we see that when the corruption only comes from the supervision, the dependence on d is reduced to q, and in most deep learning cases q ≪ d. Applying Thm. 1 directly then shows that our algorithm is also robust in multi-label settings.
3.5 COMPARISON WITH DIAKONIKOLAS ET AL. (2019) AND OTHER METHODS
SEVER (Diakonikolas et al., 2019) established state-of-the-art theoretical results for general corruptions, achieving a dimension-free O(√ε) guarantee. Compared to Diakonikolas et al. (2019), we make two contributions: (a) by assuming that the corruption comes from the labels (admittedly a stronger assumption than the general corruption setting), we obtain a better error rate; (b) our algorithm scales to deep neural networks while Diakonikolas et al. (2019) does not. We consider this a contribution given that DNN-based models are currently the state of the art for noisy-label learning problems (at least in empirical performance).
Although Diakonikolas et al. (2019) achieves very nice theoretical results, it unfortunately cannot be applied to DNNs with the current best hardware configurations. Diakonikolas et al. (2019) builds its learning algorithm on breakthroughs in dimension-free robust mean estimation, and we note that most robust mean estimators filter out data by computing the score of the projection onto the maximum singular vector. For example, Diakonikolas et al. (2019) requires performing SVD on an n × d individual-gradient matrix, where n is the sample size and d is the number of parameters. This works well for small datasets and small models, since both n and d then fit within current memory limits. For deep neural networks, however, this matrix size is far beyond current GPU memory capacity; this is likely why Diakonikolas et al. (2019) only reports ridge regression and SVM results on small data (we are not saying that they should provide DNN results). In our experiments, n is 60,000 and d is on the order of millions (the number of network parameters); it is impractical to store 60,000 copies of a neural network on a single GPU card. In contrast, our algorithm does not need to store the full gradient matrix: by considering only the loss-layer gradient norm, we can easily extend our algorithm to DNNs, and we show that this simple strategy works well both in theory and on challenging empirical tasks.
We note that some linear methods (Bhatia et al., 2015; 2017) and convex methods (Prasad et al., 2018) achieve better robustness guarantees; however, most of them cannot be directly applied to deep neural networks.
4 RELATIONSHIP TO SELF-PACED LEARNING (SPL)
SPL looks very similar to our method at first glance: instead of keeping data points with small gradient norms, SPL tries to keep data points with small losses. The gradient norm and the loss value can be tied together by the well-known Polyak-Łojasiewicz (PL) condition, which assumes there exists some constant s > 0 such that (1/2)‖∇φ(x)‖² ≥ s(φ(x) − φ*) for all x. When the neural network is highly over-parameterized, φ* can be assumed to be equal across different samples, since neural networks can achieve zero training loss (Zhang et al., 2016). By sorting the errors φ(x_i) of the data points, SPL is in effect sorting a lower bound of the gradient norm when the PL condition holds. However, the ranking by gradient norm and the ranking by loss can be very different, since there is no guarantee that the gradient norm is monotonically increasing with the loss value. We provide a geometric illustration of why SPL is not robust in the appendix. Even for the simple squared loss, the monotonic relationship is easy to break. One simple counter-example is φ(x₁, x₂) = 0.5x₁² + 50x₂²: taking the two points (1000, 1) and (495, −49.5), the monotonic relationship between loss and gradient norm does not hold. Nocedal et al. (2002) showed that the monotonic relationship holds for the squared loss φ(x) = (1/2)(x − x*)ᵀQ(x − x*) if the condition number of Q is smaller than 3 + 2√2, which is a quite strong assumption, especially when x is high-dimensional. For more general losses (e.g., neural networks), the required assumptions on the condition number can only be stronger, further breaking the monotonic relationship. Thus, although SPL sorts a lower bound of the gradient norm under mild assumptions, our algorithm is significantly different from SPL and its variations.
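A quick numeric check of this counter-example (our illustration):

```python
import math

def loss(x1, x2):            # phi(x1, x2) = 0.5*x1^2 + 50*x2^2
    return 0.5 * x1**2 + 50 * x2**2

def grad_norm(x1, x2):       # gradient is (x1, 100*x2)
    return math.hypot(x1, 100 * x2)

a, b = (1000.0, 1.0), (495.0, -49.5)
print(loss(*a), grad_norm(*a))   # 500050.0, ~1005.0  (large loss, small grad)
print(loss(*b), grad_norm(*b))   # 245025.0, ~4974.7  (small loss, large grad)
# Point b has the smaller loss but the larger gradient norm, so ranking by
# loss (SPL) and ranking by gradient norm disagree on these two points.
```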
Now we discuss the relationship between SPL and Algorithm 3 under supervision corruption. SPL has the same form as Algorithm 3 when using mean squared error for regression tasks, since in that case the loss-layer gradient norm is a monotone function of the loss itself. In classification, however, Algorithm 3 differs from SPL. To better understand the algorithm, we further analyze the difference between SPL and our algorithm for the cross-entropy loss.
For cross entropy, denote the output logits by o; we have H(y_i, f_i) = −⟨y_i, log(softmax(o_i))⟩ = −⟨y_i, log(f_i)⟩. The gradient of the cross entropy with respect to o_i is ∂H_i/∂o_i = softmax(o_i) − y_i = f_i − y_i. Thus, the loss-layer gradient norm is the MSE between y_i and f_i. Next, we investigate when MSE and cross entropy give a non-monotonic relationship. For simplicity, we only study a sufficient condition for the non-monotonic relationship, shown in Lemma 2.

Lemma 2 Let y ∈ R^q, where y_k = 1 and y_i = 0 for i ≠ k, and let α, β be two q-dimensional vectors in the probability simplex. Without loss of generality, suppose α has the smaller cross entropy loss, i.e., α_k ≥ β_k. Then a sufficient condition for ‖α − y‖ ≥ ‖β − y‖ is

Var_{i≠k}({α_i}) − Var_{i≠k}({β_i}) ≥ (q/(q−1)²)((α_k − β_k)(2 − α_k − β_k)).
As α_k ≥ β_k, the right-hand term is non-negative. In conclusion, when MSE generates a different ranking from cross entropy, the variance of the predicted probabilities over the non-true classes is larger for the discarded data point. For example, suppose the ground-truth vector is y = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0] and we have two predictions α = [0.08, 0.28, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08] and β = [0.1, 0.3, 0.34, 0.05, 0.05, 0.1, 0.03, 0.03, 0, 0]. Prediction α has the smaller MSE loss while prediction β has the smaller cross-entropy loss. Intuitively, β is more likely to come from noisy data since its prediction has two peaks (0.3 and 0.34). However, since the cross-entropy loss only considers the true-class dimension, it cannot detect this situation; the loss-layer gradient (the MSE distance to y) considers all dimensions and thus accounts for the overall prediction distribution.
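The following snippet verifies this example numerically (our illustration):

```python
import numpy as np

y = np.zeros(10); y[1] = 1.0                     # true class at index 1
alpha = np.array([0.08, 0.28] + [0.08] * 8)
beta  = np.array([0.1, 0.3, 0.34, 0.05, 0.05, 0.1, 0.03, 0.03, 0.0, 0.0])

ce  = lambda p: -np.log(p[1])                    # cross entropy with one-hot y
mse = lambda p: np.sum((p - y) ** 2)             # squared loss-layer grad norm

print(ce(alpha), ce(beta))     # ~1.273 vs ~1.204 -> beta has smaller CE
print(mse(alpha), mse(beta))   # 0.576 vs ~0.632  -> alpha has smaller MSE
# Loss-based filtering (CE) would keep beta, while the loss-layer gradient
# norm flags beta's two-peaked prediction as suspicious.
```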
5 COMBINING WITH CO-TEACHING STYLE TRAINING
Motivated by co-teaching (Han et al., 2018), one of the current state-of-the-art deep methods for learning under noisy labels, we propose Co-PRL(L), which has the same framework as co-teaching but uses the loss-layer gradient norm to select data. The full algorithm is shown in Algorithm 4 in the appendix; all hyper-parameters in Algorithm 4 have the same meaning as in the original Han et al. (2018). Compared with Algorithm 3, besides sampling data according to the loss-layer gradient norm, Co-PRL(L) has two additional modules: first, we gradually increase the fraction of data to be dropped; second, the two networks exchange their selected data to update each other's parameters.
6 EXPERIMENT
In this section, we perform experiments on benchmark regression and classification datasets. The code is available in the supplementary materials of the submission. We compare PRL(G) (Algo. 2), PRL(L) (Algo. 3), and Co-PRL(L) (Algo. 4) to the following baselines. Standard: standard training without filtering data (MSE for regression, cross entropy for classification); Normclip: standard training with norm clipping; Huber: standard training with the Huber loss (regression only); Decouple: decoupled networks, updating two networks based on their disagreement (Malach & Shalev-Shwartz, 2017) (classification only); Bootstrap: using a weighted combination of predicted and original labels as the training targets for back-propagation (Reed et al., 2014) (classification only); Min-sgd: choosing the smallest-loss sample in each minibatch to update the model (Shah et al., 2020); SPL: self-paced learning, dropping the data with the largest losses (equivalent to PRL(L) in the regression setting with MSE loss); Ignormclip: clipping individual gradients and then averaging them to update the model (regression only); Co-teaching: collaboratively training a pair of SPL models that exchange selected data (Han et al., 2018) (classification only). Since it is hard to design experiments covering all agnostic corruptions, we tried our best to include diverse types of supervision noise. The supervision corruption settings are as follows: linadv: corrupted supervision generated by a random, wrong linear relationship with the features (regression); signflip: the sign of the supervision is flipped (regression); uninoise: corrupted supervision sampled from a uniform distribution (regression); mixture: a mixture of the above corruption types (regression); pairflip: shuffled target coordinates (e.g., eyes to mouth in CelebA, or cat to dog in CIFAR) (regression and classification); symmetric: randomly assigned wrong class labels (classification). For classification we use accuracy as the evaluation metric, and R-squared is used for the regression experiments. Due to space limits, we only report the average evaluation score on the test data over the last 10 epochs; the full training curves are attached in the appendix. All experiments are repeated 5 times for regression and 3 times for classification. The main hyperparameters are given in the appendix.
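As a rough sketch of how such corruptions can be generated (our illustration; `corrupt_labels` is hypothetical, the regression modes assume a real-valued target matrix of shape (n, q), `symmetric` assumes integer class labels, and we ignore the corner case where a random class coincides with the true one):

```python
import numpy as np

def corrupt_labels(y, eps: float, mode: str, seed: int = 0):
    """Concrete attack instances for some of the corruption types above."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(eps * len(y)), replace=False)
    if mode == "signflip":                 # regression: flip the sign
        y[idx] = -y[idx]
    elif mode == "uninoise":               # regression: uniform random targets
        y[idx] = rng.uniform(y.min(), y.max(), size=y[idx].shape)
    elif mode == "pairflip":               # shuffle the target coordinates
        y[idx] = y[idx][:, rng.permutation(y.shape[1])]
    elif mode == "symmetric":              # classification: random wrong class
        y[idx] = rng.integers(0, int(y.max()) + 1, size=len(idx))
    return y
```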
6.1 REGRESSION EXPERIMENT
We use the CelebA data to perform regression tasks. The CelebA dataset has 162,770 training images, 19,867 validation images, and 19,962 test images. The target variable is the ten-dimensional set of coordinates of the left eye, right eye, nose, left mouth corner, and right mouth corner: given a human face image, the goal is to predict the 10 facial landmark coordinates in the image. We tried adding different types of noise to the landmark coordinates. We preprocess the CelebA data as follows: we train a three-layer CNN on the 162,770 training images to predict the clean coordinates (using the 19,867 validation images for early stopping), and then use the well-trained network to extract 512-dimensional features on the test set. Thus, the final data used in the experiments consist of features X ∈ R^{19962×512} and targets Y ∈ R^{19962×10}. We further split the data into training and test sets, with the training set containing 80% of the data. We then manually add linadv, signflip, uninoise, pairflip, and mixture types of supervision noise to the training targets. The corruption rate for all corruption types is varied from 0.1 to 0.4. We use a 3-layer fully connected network in the experiments. The R-squared values averaged over the last 10 epochs are reported in Table 1.
6.2 CLASSIFICATION EXPERIMENT
We perform experiments on CIFAR10 and CIFAR100 to illustrate the effectiveness of our algorithm in the classification setting. We use the 9-layer convolutional neural network of Han et al. (2018). Since most baselines include batch normalization, which makes it difficult to compute individual gradients efficiently, we drop the Ignormclip and PRL(G) baselines. In the appendix, we report results when both co-teaching and Co-PRL(L) drop the batch normalization module; co-teaching cannot maintain robustness in that setting while our method does, for reasons discussed in the appendix. We consider pairflip and symmetric supervision corruptions in the experiments. To compare with current state-of-the-art methods, for symmetric noise we also use corruption rates beyond 0.5. Although our theoretical analysis assumes the corruption rate is smaller than 0.5, we empirically show that when the noise is not adversarial (e.g., symmetric), our method can also handle such rates. Results on CIFAR10 and CIFAR100 are in Table 2. Whether using one network (PRL(L) vs. SPL) or two networks (Co-PRL(L) vs. Co-teaching), our method performs significantly better. Since the ground-truth corruption rate is hard to know in real-world problems, we also perform a sensitivity analysis on the classification tasks to show the effect of overestimating and underestimating ε; the results are in Table 3, with more discussion in the appendix.
7 CONCLUSION
In this paper, we proposed efficient algorithms to defend against agnostic supervision corruption. Both theoretical and empirical analyses showed the effectiveness of our algorithms. Two questions remain for future work. The first is whether the O(ε) error bound can be further improved, or shown to be tight. The second is whether additional properties of neural networks, such as gradient sparsity, can be exploited to obtain better algorithms.
A APPENDIX
A.1 CO-IGFILTER ALGORITHM
See Algorithm 4.
Algorithm 4: Co-PRL(L)
input: initial parameters w_f and w_g, learning rate η, fixed τ, epochs T_k and T_max, iterations N_max; return: model parameters w_f and w_g
for T = 1, 2, ..., T_max do
  for N = 1, ..., N_max do
    Randomly sample a minibatch M from Dx, D̃y (the noisy dataset)
    Get the predicted labels Ŷ_f and Ŷ_g on M from w_f and w_g
    Calculate the individual losses l_f = L(Y, Ŷ_f), l_g = L(Y, Ŷ_g)
    Calculate the loss-layer gradient norms score_f = ‖∂l_f/∂ŷ_f‖, score_g = ‖∂l_g/∂ŷ_g‖
    Sample the R(T)% smallest loss-layer-gradient-norm instances by score_f and score_g, and exchange the selections to obtain N_f and N_g
    Update w_f = w_f − η∇_{w_f}L(N_f, w_f), w_g = w_g − η∇_{w_g}L(N_g, w_g) (on the selected data)
  end
  Update R(T) = 1 − min{(T/T_k)·τ, τ}
end
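A condensed PyTorch sketch of one inner iteration of Co-PRL(L) (our illustration; it reuses the loss-layer scoring from the PRL(L) sketch and follows the co-teaching-style exchange described in Section 5):

```python
import torch
import torch.nn.functional as F

def co_prl_l_step(model_f, opt_f, model_g, opt_g, x, y, keep_rate: float):
    """Each network scores the minibatch by its own loss-layer gradient norm
    ||softmax(o) - y||, keeps the keep_rate fraction with the smallest score,
    and the peers train on each other's kept subsets."""
    def scores(model):
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=1)
            onehot = F.one_hot(y, num_classes=probs.shape[1]).float()
            return (probs - onehot).norm(dim=1)
    n_keep = int(keep_rate * x.shape[0])
    keep_f = scores(model_f).argsort()[:n_keep]   # data f believes is clean
    keep_g = scores(model_g).argsort()[:n_keep]   # data g believes is clean
    for model, opt, idx in ((model_f, opt_f, keep_g), (model_g, opt_g, keep_f)):
        loss = F.cross_entropy(model(x[idx]), y[idx])  # train on peer's picks
        opt.zero_grad()
        loss.backward()
        opt.step()
```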
A.2 FURTHER ILLUSTRATION OF THE DIFFERENCE BETWEEN SPL AND PRL(G)
In this section, we further illustrate the difference between SPL and PRL(G). For a more intuitive understanding of our algorithm, consider Figures 1a and 1b. Since we are in the agnostic label corruption setting, it is difficult to filter out exactly the corrupted data. We show two situations, one where loss filtering fails and one where gradient filtering fails. When the loss filtering method fails, the remaining corrupted data can have a large impact on the overall loss surface, whereas when the gradient filtering method fails, the remaining corrupted data have only limited impact on the overall loss surface, which confers robustness.
A.3 NETWORKS AND HYPERPARAMETERS
The hyperparameters are listed in Table 4. For classification, we use the same hyperparameters as Han et al. (2018). For CelebA, we use a 3-layer fully connected network with 256 hidden nodes per hidden layer and leaky-ReLU activations. Our code is attached in the supplementary materials.
A.4 REGRESSION R2 ON TESTING DATA CURVE
The R-squared curves on the CelebA test data are shown in Figure 2.
A.5 CLASSIFICATION CURVE
The classification curves are shown in Figure 3.
A.6 SENSITIVITY ANALYSIS
Since the ground-truth corruption rate is hard to know in real-world problems, we perform a sensitivity analysis on the classification tasks to show the effect of ε. The results are in Table 5. As we can see, the performance is stable when we overestimate the corruption rate: only when we overestimate ε can we guarantee that the gradient norm of the remaining set is small. However, when we underestimate the corruption rate, in the worst case there is no guarantee that the gradient norm of the remaining set is small. Under the empirical mean, even one large bad individual gradient can ruin the gradient estimation, and, by the convergence analysis of biased gradient descent, the final solution can then be very poor with respect to the clean data. This explains why underestimating the corruption rate gives bad results. Also, from Table 5, we can see that using the ground-truth corruption rate leads to small uncertainty.
A.7 EMPIRICAL RESULTS ON RUNNING TIME
As claimed in the paper, Algorithm 2 is not efficient. Here we report the execution time of one epoch for three methods: Standard, PRL(G), and PRL(L). For a fair comparison, we replace all batch normalization modules with group normalization, since it is hard to calculate individual gradients with batch normalization. For PRL(G), we use the opacus library (https://opacus.ai/) to calculate the individual gradients. The results are shown in Table 6.
A.8 PROOF OF CONVERGENCE OF BIASED SGD
We give the proof of the theorem on how a biased gradient affects the final convergence of SGD. We first introduce several assumptions and a definition:
Assumption 2 (L-smoothness) The function φ : R^d → R is differentiable and there exists a constant L > 0 such that for all θ₁, θ₂ ∈ R^d we have φ(θ₂) ≤ φ(θ₁) + ⟨∇φ(θ₁), θ₂ − θ₁⟩ + (L/2)‖θ₂ − θ₁‖².

Definition 1 (Biased gradient oracle) A map g : R^d × D → R^d such that g(θ, ξ) = ∇φ(θ) + b(θ, ξ) + n(θ, ξ) for a bias b : R^d × D → R^d and zero-mean noise n : R^d × D → R^d, i.e., E_ξ[n(θ, ξ)] = 0.

Compared to the standard stochastic gradient oracle, the above definition introduces the bias term b. In noisy-label settings, b is generated by the data with corrupted labels.

Assumption 3 (σ-bounded noise) There exists a constant σ > 0 such that E_ξ‖n(θ, ξ)‖² ≤ σ² for all θ ∈ R^d.

Assumption 4 (ζ-bounded bias) There exists a constant ζ > 0 such that for any ξ we have ‖b(θ, ξ)‖² ≤ ζ² for all θ ∈ R^d.

For simplicity, assume the learning rate is a constant γ; then in every iteration biased SGD performs the update θ_{t+1} ← θ_t − γ g(θ_t, ξ). The following theorem shows the convergence of the gradient norm under biased SGD.
Theorem 4 (Convergence of Biased SGD (Formal)) Under Assumptions 2, 3, and 4, define F = φ(θ₀) − φ* and step size γ = min{1/L, √(LF)/(σ√T)}. Denote the desired accuracy by k. Then T = O(1/k + σ²/k²) iterations are sufficient to obtain min_{t∈[T]} E‖∇φ(θ_t)‖² = O(k + ζ²).

Remark 2 Let k = ζ². Then T = O(1/ζ² + σ²/ζ⁴) iterations are sufficient to get min_{t∈[T]} E‖∇φ(θ_t)‖² = O(ζ²), and performing more iterations does not improve the accuracy in terms of convergence.
Since this is a standard result (similar results appear in Bernstein et al. (2018); Devolder et al. (2014); Hu et al. (2020); Ajalloeian & Stich (2020)), we provide the proof here for completeness. Proof: By L-smoothness, we have

φ(θ₂) ≤ φ(θ₁) + ⟨∇φ(θ₁), θ₂ − θ₁⟩ + (L/2)‖θ₂ − θ₁‖².

Using γ ≤ 1/L, we have

E φ(θ_{t+1}) ≤ φ(θ_t) − γ⟨∇φ(θ_t), E g_t⟩ + (γ²L/2)(E‖g_t − E g_t‖² + ‖E g_t‖²)
 = φ(θ_t) − γ⟨∇φ(θ_t), ∇φ(θ_t) + b_t⟩ + (γ²L/2)(E‖n_t‖² + ‖∇φ(θ_t) + b_t‖²)
 ≤ φ(θ_t) + (γ/2)(−2⟨∇φ(θ_t), ∇φ(θ_t) + b_t⟩ + ‖∇φ(θ_t) + b_t‖²) + (γ²L/2)E‖n_t‖²
 = φ(θ_t) + (γ/2)(−‖∇φ(θ_t)‖² + ‖b_t‖²) + (γ²L/2)E‖n_t‖².

Since ‖b_t‖² ≤ ζ² and E‖n_t‖² ≤ σ², plugging in the learning-rate constraint gives

E φ(θ_{t+1}) ≤ φ(θ_t) − (γ/2)‖∇φ(θ_t)‖² + (γ/2)ζ² + (γ²L/2)σ²,

i.e., E φ(θ_{t+1}) − φ(θ_t) ≤ −(γ/2)‖∇φ(θ_t)‖² + (γ/2)ζ² + (γ²L/2)σ².

Moving the gradient-norm term to the left-hand side and summing over iterations, we get

(1/(2T)) Σ_{t=0}^{T−1} E‖∇φ(θ_t)‖² ≤ F/(Tγ) + ζ²/2 + γLσ²/2.

Taking the minimum with respect to t and substituting the learning-rate condition directly yields the result.
A.9 PROOF OF THEOREM 2
Denote by G̃ the corrupted minibatch set and by G the original clean minibatch set, with |G| = |G̃| = m. Let N be the set of remaining data; according to our algorithm, |N| = n = (1 − ε)m. Define A as the set of individual clean gradients that are not discarded by Algorithm 1, and B as the set of individual corrupted gradients that are not discarded; by definition, N = A ∪ B. Let AD be the set of individual clean gradients that are discarded, and AR the set of individual clean gradients that are replaced by corrupted data, so that G = A ∪ AD ∪ AR; BD is the set of individual corrupted gradients discarded by our algorithm. Denote a clean gradient by g_i = α_i W_i and a corrupted gradient by g̃_i. The ℓ2 estimation error satisfies

‖µ(G) − µ(N)‖ = ‖(1/m)Σ_{i∈G} g_i − ((1/n)Σ_{i∈A} g_i + (1/n)Σ_{i∈B} g̃_i)‖
 = ‖(1/n)Σ_{i∈A}((n−m)/m) g_i + (1/n)Σ_{i∈AD}(n/m) g_i + (1/n)Σ_{i∈AR}(n/m) g_i − (1/n)Σ_{i∈B} g̃_i‖
 ≤ Σ_{i∈A}‖((m−n)/(nm)) g_i‖ + Σ_{i∈AD}‖(1/m) g_i‖ + Σ_{i∈AR}‖(1/m) g_i‖ + Σ_{i∈B}(1/n)‖g̃_i‖.

By the filtering algorithm, we can guarantee that ‖g̃_i‖ ≤ L. Let |A| = x; then |B| = n − x = (1 − ε)m − x, |AR| = m − n = εm, and |AD| = m − |A| − |AR| = n − x = (1 − ε)m − x. Thus,

‖µ(G) − µ(N)‖ ≤ x((m−n)/(nm))L + (n − x)(1/m)L + (m − n)(1/m)L + (n − x)(1/n)L
 ≤ x((m−n)/(nm) − 1/m)L + n(1/m)L + (m − n)(1/m)L + (n − x)(1/n)L
 = (1/m)((2ε − 1)/(1 − ε))xL + (1 − ε)L + εL + L − (1/n)xL
 = xL((2ε − 2)/n) + 2L.

Since (2ε − 2)/n < 0, the upper bound is largest when x is smallest. According to our problem setting, x ≥ n − εm = (1 − 2ε)m; substituting x = (1 − 2ε)m, we have

‖µ(G) − µ(N)‖ ≤ (1 − 2ε)mL((2ε − 2)/n) + 2L = ((1 − 2ε)(2ε − 2)/(1 − ε))L + 2L = ((4ε − 4ε²)/(1 − ε))L = 4εL.

Hence, since ε < 0.5,

‖µ(G) − µ(N)‖ = O(εL).

Note that if the Lipschitz continuity assumption does not hold, then L is dimension-dependent.
A.10 PROOF OF RANDOMIZED FILTERING ALGORITHM
Lemma 3 (Gradient Estimation Error for Randomized Filtering) Let G̃ ∈ R^{m×d} be a corrupted matrix generated as in Problem 2, and let G ∈ R^{m×d} be the original clean gradient matrix. Suppose we arbitrarily select n = (1 − ε)m rows from G̃ to obtain the remaining set N ∈ R^{n×d}. Let µ be the empirical mean function. Assume the clean gradient before the loss layer has bounded operator norm, ‖W‖_op ≤ C, the maximum clean loss-layer gradient is max_i ‖α_i‖ = k, the maximum corrupted loss-layer gradient is max_i ‖δ_i‖ = v, and ε < 0.5. Then we have:

‖µ(G) − µ(N)‖ ≤ Ck(3ε − 4ε²)/(1 − ε) + Cεv/(1 − ε).
A.10.1 PROOF OF LEMMA 3
Denote by G̃ the corrupted minibatch set and by G the original clean minibatch set, with |G| = |G̃| = m. Let N be the set of remaining data; according to our algorithm, |N| = n = (1 − ε)m. Define A as the set of individual clean gradients not discarded by Algorithm 3, and B as the set of individual corrupted gradients not discarded; by definition, N = A ∪ B. Let AD be the set of individual clean gradients that are discarded, and AR the set of individual clean gradients replaced by corrupted data, so that G = A ∪ AD ∪ AR; BD is the set of individual corrupted gradients discarded by our algorithm. Write clean gradients as g_i = α_i W_i and corrupted gradients as g̃_i = δ_i W_i; by assumption, ‖W_i‖_op ≤ C.

The ℓ2 estimation error satisfies

‖µ(G) − µ(N)‖ = ‖(1/m)Σ_{i∈G} g_i − ((1/n)Σ_{i∈A} g_i + (1/n)Σ_{i∈B} g̃_i)‖
 ≤ ‖(1/n)Σ_{i∈A}((n−m)/m) g_i + (1/n)Σ_{i∈AD}(n/m) g_i + (1/n)Σ_{i∈AR}(n/m) g_i‖ + ‖(1/n)Σ_{i∈B} g̃_i‖. (1)

Let |A| = x; then |B| = n − x = (1 − ε)m − x, |AR| = m − n = εm, and |AD| = m − |A| − |AR| = n − x = (1 − ε)m − x. Thus,

‖µ(G) − µ(N)‖ ≤ Σ_{i∈A}‖((m−n)/(nm)) g_i‖ + Σ_{i∈AD}‖(1/m) g_i‖ + Σ_{i∈AR}‖(1/m) g_i‖ + Σ_{i∈B}(1/n)‖g̃_i‖.

For an individual gradient, by the label-corruption structure of Problem 2 and ‖W‖_op ≤ C, we have ‖g_i‖ ≤ ‖α_i‖‖W_i‖_op ≤ C‖α_i‖. With max_i ‖α_i‖ = k and max_i ‖δ_i‖ = v, this gives ‖g_i‖ ≤ Ck and ‖g̃_i‖ ≤ Cv, so

‖µ(G) − µ(N)‖ ≤ Cx((m−n)/(nm))k + C(n − x)(1/m)k + C(m − n)(1/m)k + C(n − x)(1/n)v.

The above bound holds for any feasible x, so we take the worst case over x. Rearranging,

‖µ(G) − µ(N)‖ ≤ Cx((m−n)/(nm) − 1/m)k + Cn(1/m)k + C(m − n)(1/m)k + C(n − x)(1/n)v
 = Cx((k(2ε − 1) − v)/(m(1 − ε))) + Ck + Cv.

Since for ε < 0.5 we have (k(2ε − 1) − v)/(m(1 − ε)) < 0, the bound is largest when x is smallest. According to our problem setting, (1 − 2ε)m = n − εm ≤ x ≤ n = (1 − ε)m. Substituting x = (1 − 2ε)m, we have

‖µ(G) − µ(N)‖ ≤ Ck(1 − 2ε)(2ε − 1)/(1 − ε) + Ck + Cv − Cv(1 − 2ε)/(1 − ε)
 = Ck(3ε − 4ε²)/(1 − ε) + Cεv/(1 − ε).
A.11 PROOF OF THEOREM 3
According to Algorithm 3, we can guarantee that v ≤ k; then we have:

‖µ(G) − µ(N)‖ ≤ Ck(3ε − 4ε²)/(1 − ε) + Cεv/(1 − ε)
 ≤ Ck(4ε − 4ε²)/(1 − ε) = 4εCk = O(ε√q),

since C is a constant and k is the norm of a q-dimensional vector.
A.12 COMPARISON BETWEEN SORTING THE LOSS-LAYER GRADIENT NORM AND SORTING THE LOSS VALUE
Assume we have a d-class label y ∈ R^d with y_k = 1 and y_i = 0 for i ≠ k. With slight abuse of notation, suppose we have two predictions p, q ∈ R^d in the probability simplex. Without loss of generality, assume p has the smaller cross entropy loss, which implies p_k ≥ q_k. For MSE, assume the opposite ordering:

‖p − y‖² ≥ ‖q − y‖² ⇒ Σ_{i≠k} p_i² + (1 − p_k)² ≥ Σ_{i≠k} q_i² + (1 − q_k)². (2)

For the coordinates p_i, i ≠ k, we have

Var_{i≠k}(p_i) = E(p_i²) − (E(p_i))² = (1/(d−1)) Σ_{i≠k} p_i² − (1/(d−1)²)(1 − p_k)². (3)

Then

Σ_{i≠k} p_i² + (1 − p_k)² ≥ Σ_{i≠k} q_i² + (1 − q_k)²
⇒ Var_{i≠k}(p_i) + (d/(d−1)²)(1 − p_k)² ≥ Var_{i≠k}(q_i) + (d/(d−1)²)(1 − q_k)²
⇒ Var_{i≠k}(p_i) − Var_{i≠k}(q_i) ≥ (d/(d−1)²)((1 − q_k)² − (1 − p_k)²)
⇒ Var_{i≠k}(p_i) − Var_{i≠k}(q_i) ≥ (d/(d−1)²)((p_k − q_k)(2 − p_k − q_k)). (4)
REVIEW
The related work section misses MANY related results on corrupted data and robust mean estimation.
The related work section forgets to mention existing theoretical results that apply robust mean estimation to robust gradient computation.
The related work section does not provide an accurate overview of existing results. For example, "the algorithms themselves are NP-hard" is not a correct statement -- NP-hardness describes the hardness of a problem, not an algorithm.
Collaborative learning methods seem to have no solid theoretical understanding, and it is unclear why the proposed algorithm builds on top of them.
Regarding the novelty of the theorems: Theorem 1 studies the convergence of biased gradients, which is another known research topic that has been studied before, but the authors have not discussed or compared their results with existing ones, and the novelty may be overclaimed. Theorem 3 gives a robustness guarantee with corruption only in the supervision, and existing results have shown an O(ε) guarantee (for linear regression and its variants).
I have not listed all the missing literature (I believe it is easy to find after a careful literature review), but I can add comments later if needed.
ICLR | Title
Provable Robust Learning for Deep Neural Networks under Agnostic Corrupted Supervision
Abstract
Training deep neural models in the presence of corrupted supervisions is challenging as the corrupted data points may significantly impact the generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption, and provides a unified framework for both classification and regression problems. Different from many existing approaches that quantify the quality of individual data points (e.g., loss values) and filter out data points accordingly, the proposed algorithm focuses on controlling the collective impact of data points on the averaged gradient. Even when a corrupted data point failed to be excluded by the proposed algorithm, the data point will have very limited impact on the overall loss, as compared with state-of-the-art filtering data points based on loss values. Extensive empirical results on multiple benchmark datasets have demonstrated the robustness of the proposed method under different types of corruptions.
1 INTRODUCTION
Corrupted supervision is a common issue in real-world learning tasks, where the learning targets are not accurate due to various factors in the data collection process. In deep learning models, such corruptions are especially severe, whose degree-of-freedom makes them easily memorize corrected examples and susceptible to overfitting (Zhang et al., 2016).
There are extensive efforts to achieve robustness against corrupted supervisions. A natural approach to deal with corrupted supervision in deep neural networks (DNNs) is to reduce the model exposure to corrupted data points during training. By detecting and filtering (or re-weighting) the possible corrupted samples, the learning is expected to deliver a model that is similar to the one trained on clean data (without corruption) (Kumar et al., 2010; Han et al., 2018; Zheng et al., 2020). There are different criteria designed to identify the corrupted data points in training. For example, Kumar et al. (2010); Han et al. (2018); Jiang et al. (2018) leveraged the loss function values of data points; Zheng et al. (2020) tapped prediction uncertainty for filtering data; Malach & Shalev-Shwartz (2017) used the disagreement between two deep networks; Reed et al. (2014) utilized the prediction consistency of neighboring iterations. The success of these methods highly depends on the effectiveness of the detection criteria in correctly identifying the corrupted data points. Since the corrupted labels remain unknown throughout the learning, such “unsupervised” detection approaches may not be effective, either lack theoretical guarantees of robustness (Han et al., 2018; Reed et al., 2014; Malach & Shalev-Shwartz, 2017; Li et al., 2017) or provide guarantees under assumptions of the availability of prior knowledge about the type of corruption (Zheng et al., 2020; Shah et al., 2020; Patrini et al., 2017; Yi & Wu, 2019). Besides, another limitation of many existing approaches is that, they are exclusively designed for classification problems (e.g., Malach & Shalev-Shwartz (2017); Reed et al. (2014); Menon et al. (2019); Zheng et al. (2020)) and are not straightforward to extend to solve regression problems.
To tackle these challenges, this paper presents a unified optimization framework with robustness guarantees without any assumptions on how supervisions are corrupted, and is applicable to both classification and regression problems. Instead of developing an accurate criterion for detection corrupted samples, we adopt a novel perspective and focus on limiting the collective impact of corrupted samples during the learning process through robust mean estimation of gradients. Specifically, if our estimated average gradient is close to the gradient from the clean data during the learning iterations,
then the final model will be close to the model trained on clean data. As such, a corrupted data point can still be used during the training when it does not considerably alter the averaged gradient. This observation has remarkably impact on our algorithm design: instead of explicitly quantifying (and identifying) individual corrupted data points, which is a hard problem in itself, we are now dealing with an easier task, i.e., eliminating training data points that significantly distort the mean gradient estimation. One immediate consequence of this design is that, even when a corrupted data point failed to be excluded by the proposed algorithm, the data point is likely to have very limited impact on the overall loss, as compared with state-of-the-art filtering data points based on loss values. We perform experiments on both regression and classification with corrupted supervision on multiple benchmark datasets. The results show that the proposed method outperforms state-of-the-art.
2 BACKGROUND
Learning from corrupted data (Huber, 1992) has attracted considerable attention in the machine learning community (Natarajan et al., 2013). Many recent studies have investigated robustness of classification tasks with noisy labels. For example, Kumar et al. (2010) proposed a self-paced learning (SPL) approach, which assigns higher weights to examples with smaller loss. A similar idea was used in curriculum learning (Bengio et al., 2009), in which the model learns easy samples first before learning harder ones. Alternative methods inspired by SPL include learning the data weights (Jiang et al., 2018) and collaborative learning (Han et al., 2018; Yu et al., 2019). Label correction (Patrini et al., 2017; Li et al., 2017; Yi & Wu, 2019) is another approach, which revises original labels in data with a goal to recover clean labels from corrupt ones. However, since we do not have access to which data points are corrupted, it is hard to get provable guarantees for label correction without strong assumptions regarding the corruption type.
Accurate estimation of gradients is a key step for successful optimization. The relationship between gradient estimation and its final convergence has been widely studied in the optimization community. Since computing an approximated (and potentially biased) gradient is often more efficient than computing the exact gradient, many studies used approximated gradients to optimize their models and showed that they suffer from the biased estimation problem if there is no assumptions on the gradient estimation (d’Aspremont, 2008; Schmidt et al., 2011; Bernstein et al., 2018; Hu et al., 2020; Ajalloeian & Stich, 2020).
A closely related topic is robust estimation of the mean. Given corrupted data, robust mean estimation aims at generating an estimated mean µ̂ such that the difference between the estimated mean on corrupted data and the mean of clean data ‖µ̂− µ‖2 is minimized. It was showed that median or trimmed-mean are the optimal statistics for mean estimation in one-dimensional data (Huber, 1992). However, robustness in high dimension is quite challenging since applying the coordinate-wise optimal robust estimator would lead to an error factor O( √ d) that scales with the data dimension. Although some classical work, such as Tukey median (Tukey, 1975), successfully designed algorithms to get rid of the O( √ d) error, the algorithms themselves are not polynomial-time algorithm. More recently, Diakonikolas et al. (2016); Lai et al. (2016) successfully designed polynomial-time algorithms with dimension-free error bounds. The results have been widely applied to improve algorithmic efficiency in various scenarios (Dong et al., 2019; Cheng et al., 2020).
Robust optimization aims to optimize the model given corrupted data. Many previous studies improve the robustness of the optimization in different problem settings. However, most of them either study linear regression and its variantes(Bhatia et al., 2015; 2017; Shen & Sanghavi, 2019) or study the convex optimization (Prasad et al., 2018). Thus, those results cannot be directly generalized to deep neural networks. Diakonikolas et al. (2019) is a very generalized non-convex optimization method with the agnostic corruption guarantee. However, the space complexity of the algorithm is high, thus cannot be applied to deep neural networks given current hardware limitations.
3 METHODOLOGY
Before introducing our algorithm, we first discuss the corrupted supervision. To characterize agnostic corruptions, we make use of an adversary that tries to corrupt the supervision of a clean data. There is no limitation on how the adversary corrupts the supervision, which can either be randomly permuting the target, or in a way that maximizes the negative impact (i.e., lower performance).
Firstly, the adversary can choose up to fraction of the clean target Dy ∈ Rn×q and change the selected row of Dy to arbitrary valid numbers, generating D y ∈ Rn×q . Then, the adversary returns the corrupted dataset Dx, D y to our learning algorithmA. In this process, the only constraint on the adversary is the fraction, and the adversary has full knowledge of the data, and even the learning algorithm A. A natural question to ask is: Given a data set with -fraction corrupted supervision Dx ∈ Rn×p, D y , and a learning objective φ : Rp × Rq × Rd → R parameterized by θ, can we output parameters θ ∈ Rd such that ‖∇θφ(θ;Dx,Dy)‖ is minimized. When = 0, we have D y = Dy and learning is done on the clean data. The stochastic gradient descent could converge to a stationary point, where ‖∇θφ(θ;Dx,Dy)‖ = 0. However, when the supervision is corrupted as above, this is not the case any more, due to the error in θ impacted by the corrupted data. We thus want an efficient algorithm to find a model θ that minimizes ‖∇θφ(θ;Dx,Dy)‖. A robust model θ should have a small value of ‖∇θφ(θ;Dx,Dy)‖, and we hypothesize that a smaller ‖∇θφ(θ;Dx,Dy)‖ has better generalization.
3.1 STOCHASTIC GRADIENT DESCENT WITH BIASED GRADIENT
A direct consequence of corrupted supervision is biased gradient estimation. In this section, we will first analyze how such biased gradient estimation affects the robustness of learning. The classical analysis of stochastic gradient descent (SGD) requires access to the stochastic gradient oracle, which is an unbiased estimation of the true gradient. However, corrupted supervision leads to corrupted gradients, and it is thus difficult to get unbiased gradient estimation without assumptions of how the gradients are corrupted. We start the analysis by the following informal theorem (without elaborated discussions of assumptions) of how biased gradient affects the final convergence of SGD. Its formal version is provided in Theorem 4, Appendix.
Theorem 1 (Convergence of Biased SGD (Informal)) Under mild assumptions, denote ζ to be the maximum `2 norm of the difference between clean minibatch gradient and corrupted minibatch gradient ‖g− g̃‖ ≤ ζ, then by using biased gradient estimation, SGD converges to the ζ-approximated stationary points: E‖∇φ(θt)‖2 = O(ζ2). Remark 1 In the corrupted supervision setting, let the gradient estimated by corrupted data D be ĝ, the gradient estimated by clean data D be g. Assume ‖g̃ − g‖ ≤ ζ, it follows that when using corrupted dataset in SGD, it converges to the ζ-approximated stationary point of the objective defined by the clean data. Note the difference between above theorem and typical convergence theorem is that we are using a biased gradient estimation.
According to Theorem 1 and the remark, a robust estimation of the gradient g is the key to ensure a robust model (converge to the clean solution). We also assume the loss function has the form of L(y, ŷ), where many commonly used loss functions fall in this category.
3.2 ROBUST GRADIENT ESTIMATION FOR GENERAL DATA CORRUPTION
We first introduce Algo. 2 for general corruption (i.e. corruption on both features and/or supervisions). The algorithm excludes the data points with large gradient norms, and uses the empirical mean of the remaining to update gradients. In Thm. 2 we give its robustness property.
Algorithm 1: Robust Mean Estimation for Corrupted Gradient input: gradient matrix G ∈ m× d, corruption rate return estimated mean µ̂ ∈ Rd ; 1. For each row zi in G, calculate the l2 norm ‖zi‖ 2. Choose the -fraction rows with large ‖zi‖ 3. Remove those selected rows, and return the empirical mean of the rest points as µ̂.
Assumption 1 (Individual L-smooth loss) For every individual loss function φi, there exists a constant L > 0 such that for a clean sample i, we have |φi(x) − φi(y)| ≤ L‖x − y‖ for any x, y.
Algorithm 2: (PRL(G)) Provable Robust Learning for General Corrupted Data
input: corrupted dataset Dx, D̃y; learning rate γt
return: model parameter θ
for t = 1 to maxiter do
  Randomly sample a minibatch M from Dx, D̃y
  Calculate the individual gradients G̃ on M
  Apply Algorithm 1 to G̃ to get the robust gradient estimate µ̂
  Update the model: θ_{t+1} = θt − γt µ̂
end

Theorem 2 (Robust Gradient Estimation For Data Corruption) Let G̃ ∈ R^{m×d} be an ε-corrupted gradient matrix, and G ∈ R^{m×d} be the clean gradient matrix. Let µ be the empirical mean function. Then the output µ̂ of Algo. 1 applied to G̃ satisfies ‖µ(G) − µ̂‖ = O(ε√d). Moreover, if Asm. 1 holds, we further have ‖µ(G) − µ̂‖ = O(εL).
Combining with the aforementioned convergence analysis of biased SGD, we get the following:
Corollary 1 (Robust Optimization For Corrupted Data) Under the assumptions used in Thm. 1 together with Asm. 1, applying Algo. 1 to any ε-fraction corrupted data yields min_{t∈[T]} E‖∇φ(θt)‖ = O(εL) for large enough T. If Asm. 1 does not hold, then we get min_{t∈[T]} E‖∇φ(θt)‖ = O(ε√d) for large enough T.
The robustness guarantee states that even when training on generally corrupted data (corrupted supervision is a special case), Algo. 2 guarantees that the gradient norm on the remaining data cannot be too large. Since Thm. 2 gives a dimension-free error bound when Asm. 1 holds, Corollary 1 also gives a dimension-free robustness guarantee under Asm. 1. We defer the detailed discussion of O(εL) to later sections. Although the error bound O(εL) sounds good, it still has several drawbacks. First, a dimension-free error bound (one whose error does not grow with increasing dimension) is critical when working with neural networks, due to the extremely large gradient dimension (i.e., the number of parameters of the network); yet Thm. 2 gives a dimension-free bound only when Asm. 1 holds, which is quite strong. In addition, even when Asm. 1 holds, L can be large, leading to a large gradient estimation error. Existing work (Diakonikolas et al., 2019) already achieves a dimension-free O(√ε) guarantee under general corruption, which is a much better theoretical result than the above theorem. However, in practice, we found that the gradient norms of deep neural networks for individual data points are usually not very large, even at the beginning of training, which can be partially attributed to the network structure. Further discussion of this issue is beyond the scope of this paper, but the theoretical bound above states that, for general models, the robustness depends on the number of parameters.
Another concern with Alg. 2 is efficiency: it requires computing individual gradients. Although there are advanced approaches for obtaining individual gradients, e.g., Goodfellow (2015), they are still relatively slow compared to the commonly used batched back-propagation. Moreover, these methods are usually not compatible with popular components such as batch normalization (BN), since individual gradients are not independent inside BN; using them therefore forfeits the benefits of parallelization.
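To make this cost concrete, a naive way to obtain the individual gradients needed by Algo. 2 is one backward pass per example, as in the hypothetical PyTorch sketch below; this is roughly a factor of the minibatch size slower than a single batched backward pass, which is exactly the efficiency concern raised above.

import torch

def per_sample_gradients(model, loss_fn, x, y):
    # Naive per-example gradients: one backward pass per data point.
    # Returns an (m, d) matrix of flattened gradients, where d = #parameters,
    # so the memory cost is O(m * d).
    rows = []
    for xi, yi in zip(x, y):
        model.zero_grad()
        loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
        loss.backward()
        g = torch.cat([p.grad.flatten() for p in model.parameters()])
        rows.append(g.detach().clone())
    return torch.stack(rows)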
3.3 ROBUST GRADIENT ESTIMATION FOR ONE DIMENSIONAL CORRUPTED SUPERVISION
In this section, we show that the above robustness bound can be improved if we assume the corruption comes only from the supervision. Moreover, by fully exploiting the gradient structure under corrupted supervision, our algorithm becomes much more efficient and is compatible with batch normalization. We use the one-dimensional supervision setting (binary classification or single-target regression) to illustrate the intuition and extend it to more general settings in the next section. Consider a high-dimensional supervised learning problem with X ∈ R^{n×p} and y ∈ R^n. The goal is to learn a function f parameterized by θ ∈ R^d minimizing the loss min_θ Σ_{i=1}^n φi = min_θ Σ_{i=1}^n L(yi, f(xi, θ)). The gradient for a data point i is ∇θφi = (∂li/∂fi)(∂fi/∂θ) = αi gi.
One key observation is that when only the supervision is corrupted, the corruption contributes only to the term αi = ∂li/∂fi, which is a scalar in the one-dimensional setting. In other words, given the clean gradient gi ∈ R^d of the ith point, corrupted supervision can only perturb the length of the gradient vector, changing it from αi gi to δi gi, where δi = ∂l̃i/∂fi. If αi and δi were known, we could easily eliminate the impact of corrupted supervision. But this is not the case, since we have only the possibly corrupted target ỹi as opposed to the ground truth yi.
On the other hand, the fact that corrupted supervision only rescales the clean gradient can be used to reshape the robust optimization problem. Recall that in every iteration we update the model by θ⁺ = θ − γµ(G), where µ denotes the empirical mean function and G = [∇θφ1, . . . , ∇θφm]ᵀ ∈ R^{m×d} is the gradient matrix for a minibatch of size m. We then have the following:
Problem 1 (Robust Gradient Estimation for Corrupted Supervision - One-Dimensional Case) Given a clean gradient matrix G ∈ R^{m×d} and an ε-corrupted matrix G̃ in which at most an ε-fraction of rows are corrupted from αi gi to δi gi, design an algorithm A : R^{m×d} → R^d that minimizes ‖µ(G) − A(G̃)‖.
Note that when ‖δi‖ is large, the corrupted gradient has a large effect on the empirical mean, and vice versa. This motivates an algorithm that filters out data points by the loss-layer gradient norm ‖∂li/∂fi‖. If the loss-layer gradient norm of a data point is large (in the one-dimensional case this gradient reduces to a scalar and the norm becomes its absolute value), we exclude the data point when computing the empirical mean of the gradients for that iteration. This algorithm is applicable to both regression and classification problems. In particular, when using the mean squared error (MSE) loss for regression, the gradient norm is exactly the loss itself, and the algorithm reduces to self-paced learning (Kumar et al., 2010). We summarize the procedure in Alg. 3 and extend it to the more general multi-dimensional case in the next section.
Algorithm 3: (PRL(L)) Efficient Provable Robust Learning for Corrupted Supervision
input: dataset Dx, D̃y with corrupted supervision; learning rate γt
return: model parameter θ
for t = 1 to maxiter do
  Randomly sample a minibatch M from Dx, D̃y
  Compute the predicted labels Ŷ on M
  Calculate the loss-layer gradient norm (i.e., ‖ŷ − y‖ for mean squared error or cross entropy) for each data point in M
  Remove the top τ-fraction of data points from M according to ‖ŷ − y‖
  Take the empirical mean of the remaining points in M as the robust mean estimate µ̂
  Update the model: θ_{t+1} = θt − γt µ̂
end
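A minimal PyTorch sketch of one PRL(L) training step is shown below; tau is the drop fraction from Alg. 3, the score ‖ŷ − y‖ follows Secs. 3.3 and 3.4, and the function and variable names are our own. We assume y is one-hot encoded here (classification).

import torch
import torch.nn.functional as F

def prl_l_step(model, optimizer, x, y, tau):
    # One PRL(L) update: drop the tau-fraction of the minibatch with the
    # largest loss-layer gradient norm ||y_hat - y||, then take a standard
    # SGD step on the remaining points (their average gradient is mu_hat).
    with torch.no_grad():
        y_hat = torch.softmax(model(x), dim=1)   # predictions f_i
        scores = torch.norm(y_hat - y, dim=1)    # loss-layer gradient norms
        n_keep = x.shape[0] - int(tau * x.shape[0])
        keep = torch.argsort(scores)[:n_keep]    # small-norm points survive
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x[keep]), y[keep].argmax(dim=1))
    loss.backward()                              # one batched backward pass
    optimizer.step()
    return loss.item()

Note that only a single batched forward/backward pass over the kept points is required, which is why this scheme remains compatible with batch normalization.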
3.4 EXTENSION TO MULTI-DIMENSIONAL CORRUPTED SUPERVISION
To extend our algorithm and analysis to the multi-dimensional case, let q be the supervision dimension. The gradient for each data point is ∇θφi = (∂li/∂fi)(∂fi/∂θ), where ∂li/∂fi ∈ R^q is the gradient of the loss with respect to the model outputs, and ∂fi/∂θ ∈ R^{q×d} is the gradient of the model outputs with respect to the model parameters. As before, when the supervision is corrupted, the corruption enters through the term ∂li/∂fi, which is now a vector. Let δi = ∂l̃i/∂fi ∈ R^q, αi = ∂li/∂fi ∈ R^q, Wi = ∂fi/∂θ ∈ R^{q×d}, and let m be the minibatch size. Denote the clean gradient matrix by G ∈ R^{m×d}, whose ith row is gi = αi Wi. The multi-dimensional robust gradient estimation problem is then defined by:
Problem 2 (Robust Gradient Estimation for Corrupted Supervision - Multi-Dimensional Case) Given a clean gradient matrix G and an ε-corrupted matrix G̃ in which at most an ε-fraction of rows are corrupted from αi Wi to δi Wi, design an algorithm A : R^{m×d} → R^d that minimizes ‖µ(G) − A(G̃)‖.
We start our analysis by investigating the effect of a filtering-based algorithm, i.e., using the empirical mean gradient of a (1 − ε)-fraction subset to estimate the empirical mean of the clean gradient matrix. We have the following result for a randomized filtering-based algorithm (proof in the Appendix):
Lemma 1 (Gradient Estimation Error for Randomly Dropping an ε-fraction of the Data) Let G̃ ∈ R^{m×d} be a corrupted matrix generated as in Problem 2, and G ∈ R^{m×d} be the original clean gradient matrix. Suppose an arbitrary (1 − ε)-fraction of rows is selected from G̃ to form the matrix N ∈ R^{n×d}. Let µ be the empirical mean function. Assume the clean gradient before the loss layer has bounded operator norm, i.e., ‖W‖op ≤ C; let the maximum clean loss-layer gradient be max_{i∈G} ‖αi‖ = k and the maximum corrupted loss-layer gradient be max_{i∈N} ‖δi‖ = v. Then we have:

‖µ(G) − µ(N)‖ ≤ Ck(3ε − 4ε²)/(1 − ε) + Cvε/(1 − ε).
We see that v is the only term related to the corrupted supervision. If v is large, the bound is not safe, since the right-hand side can be arbitrarily large (i.e., an adversary can change the labels so that v is extremely large). Controlling the magnitude of v thus effectively controls the bound: for example, if we manage to ensure v ≤ k, the bound is safe. This can be achieved by sorting the gradient norms at the loss layer and then discarding the largest ε-fraction of data points. We thus have the following result.
Theorem 3 (Robust Gradient Estimation For Supervision Corruption) Let G̃ be a corrupted matrix generated as in Problem 2, q be the label dimension, and µ be the empirical mean of the clean matrix G. Assume the clean gradient before the loss layer has bounded operator norm ‖W‖op ≤ C. Then the gradient estimate µ̂ in Algo. 3 satisfies ‖µ − µ̂‖ = O(ε√q) ≈ O(ε).
Comparing Thm. 2 and Thm. 3, we see that when the corruption comes only from the supervision, the dependence on d is reduced to q, and in most deep learning cases q ≪ d. Applying Thm. 1 directly then shows that our algorithm is also robust in multi-label settings.
3.5 COMPARISON WITH DIAKONIKOLAS ET AL. (2019) AND OTHER METHODS
SEVER (Diakonikolas et al., 2019) showed promising state-of-the-art theoretical results for general corruption, achieving a dimension-free O(√ε) guarantee. Compared to Diakonikolas et al. (2019), we make two contributions: (a) by assuming the corruption comes from the labels (admittedly a strong assumption compared to the general corruption setting), we obtain a better error rate; (b) our algorithm scales to deep neural networks, while that of Diakonikolas et al. (2019) does not. We view this as a contribution, considering that DNN-based models are currently the state-of-the-art methods for noisy-label learning problems (at least in empirical performance).
Although Diakonikolas et al. (2019) achieves very nice theoretical results, it unfortunately cannot be applied to DNNs under current hardware constraints. Diakonikolas et al. (2019) builds on dimension-free robust mean estimation breakthroughs, and we note that most robust mean estimators rely on filtering out data by computing the projection score onto the maximum singular vector. For example, Diakonikolas et al. (2019) requires performing SVD on an n × d individual-gradient matrix, where n is the sample size and d is the number of parameters. This works well for small datasets and small models, since both n and d fit within current memory limits. However, for deep neural networks this matrix is far beyond current GPU memory capacity, which may be why Diakonikolas et al. (2019) reports only ridge regression and SVM results on small data (we are not saying that they should have provided DNN results). In our experiments, n is 60000 and d is on the order of millions (the number of network parameters); it is impractical to store 60000 copies of a neural network on a single GPU card. In contrast, our algorithm does not need to store the full gradient matrix: by considering only the loss-layer gradient norm, we can easily extend our algorithm to DNNs, and we show that this simple strategy works well both in theory and on challenging empirical tasks.
We note that some linear (Bhatia et al., 2015; 2017) and convex (Prasad et al., 2018) methods achieve better robustness guarantees; however, most of them cannot be directly applied to deep neural networks.
4 RELATIONSHIP TO SELF-PACED LEARNING (SPL)
SPL looks very similar to our method at first glance: instead of keeping data points with small gradient norm, SPL keeps data points with small loss. The gradient norm and the loss function can be tied together by the well-known Polyak-Łojasiewicz (PL) condition, which assumes there exists a constant s > 0 such that (1/2)‖∇φ(x)‖² ≥ s(φ(x) − φ*) for all x. When the neural network is highly over-parameterized, φ* can be assumed equal across different samples, since neural networks can achieve zero training loss (Zhang et al., 2016). By sorting the error φ(xi) for every data point, SPL is in effect sorting a lower bound on the gradient norm when the PL condition holds. However, the ranking by gradient norm and the ranking by loss can be very different, since there is no guarantee that the gradient norm increases monotonically with the loss value. We provide a geometric illustration of why SPL is not robust in the appendix. Here we show that even for a simple square loss, the monotonic relationship easily breaks. One easy counter-example is φ(x₁, x₂) = 0.5x₁² + 50x₂²: taking the two points (1000, 1) and (495, −49.5), we find that the monotonic relationship does not hold for this pair. Nocedal et al. (2002) showed that for a square loss φ(x) = (1/2)(x − x*)ᵀQ(x − x*), the monotonic relationship holds if the condition number of Q is smaller than 3 + 2√2, which is a rather strong assumption, especially when x is high-dimensional. For more general loss functions (e.g., neural networks), the required conditions on the condition number can only be stronger, thus breaking the monotonic relationship. Therefore, although SPL sorts a lower bound of the gradient norm under mild assumptions, our algorithm differs significantly from SPL and its variations.
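The counter-example is easy to check numerically; the sketch below evaluates φ and ‖∇φ‖ at both points and confirms that the lower-loss point has the larger gradient norm.

import numpy as np

phi = lambda x: 0.5 * x[0] ** 2 + 50 * x[1] ** 2   # phi(x1, x2)
grad = lambda x: np.array([x[0], 100 * x[1]])      # gradient of phi

a, b = np.array([1000.0, 1.0]), np.array([495.0, -49.5])
print(phi(a), np.linalg.norm(grad(a)))   # ~500050.0, ~1005.0
print(phi(b), np.linalg.norm(grad(b)))   # ~245025.0, ~4974.7
# b has the smaller loss but the larger gradient norm, so ranking by loss
# and ranking by gradient norm disagree on this pair.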
Now, we discuss the relationship between SPL and Algorithm 3 under supervision corruption. SPL has the same form as Algorithm 3 when using mean squared error for regression tasks, since the loss-layer gradient norm equals the loss itself. In classification, however, Algorithm 3 differs from SPL. To better understand the algorithm, we further analyze the difference between SPL and our algorithm for the cross-entropy loss.
For cross entropy, denote the output logits by oi; then H(yi, fi) = −⟨yi, log(softmax(oi))⟩ = −⟨yi, log(fi)⟩. The gradient of the cross entropy with respect to oi is ∂Hi/∂oi = softmax(oi) − yi = fi − yi. Thus, the loss-layer gradient norm is the ℓ2 distance between fi and yi. Next, we investigate when MSE and cross entropy give a non-monotonic relationship. For simplicity, we only study a sufficient condition for the non-monotonic relationship, shown in Lemma 2.
Lemma 2 Let y ∈ R^q with yk = 1 and yi = 0 for i ≠ k, and let α, β be two q-dimensional vectors in the probability simplex. Without loss of generality, suppose α has the smaller cross-entropy loss, i.e., αk ≥ βk. Then a sufficient condition for ‖α − y‖ ≥ ‖β − y‖ is

Var_{i≠k}({αi}) − Var_{i≠k}({βi}) ≥ (q/(q − 1)²)((αk − βk)(2 − αk − βk)).
Since αk ≥ βk, the right-hand side is non-negative. In other words, when MSE ranks differently from cross entropy, the discarded data point has larger variance over the probabilities of the non-true classes. Suppose we have a ground-truth vector y = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0] and two predictions α = [0.08, 0.28, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08] and β = [0.1, 0.3, 0.34, 0.05, 0.05, 0.1, 0.03, 0.03, 0, 0]. The prediction α has the smaller MSE loss, while β has the smaller cross-entropy loss. Intuitively, β is more likely to correspond to noisy data, since its prediction has two peaks (0.3 and 0.34). However, since the cross-entropy loss considers only one dimension, it cannot detect this situation. In contrast, the loss-layer gradient (the MSE-style distance) considers all dimensions and therefore takes the overall prediction distribution into account.
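This example can be verified directly; the following sketch computes both losses for α and β and reproduces the disagreement.

import numpy as np

y = np.zeros(10); y[1] = 1.0                    # true class is the second one
alpha = np.array([0.08, 0.28] + [0.08] * 8)
beta = np.array([0.1, 0.3, 0.34, 0.05, 0.05, 0.1, 0.03, 0.03, 0.0, 0.0])

mse = lambda p: np.sum((p - y) ** 2)            # squared loss-layer gradient norm
ce = lambda p: -np.log(p[1])                    # cross entropy on the true class
print(mse(alpha), ce(alpha))   # ~0.576, ~1.273: alpha has the smaller MSE
print(mse(beta), ce(beta))     # ~0.632, ~1.204: beta has the smaller cross entropy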
5 COMBINING WITH CO-TEACHING STYLE TRAINING
Motivated by co-teaching (Han et al., 2018), one of the current state-of-the-art deep methods for learning with noisy labels, we propose Co-PRL(L), which has the same framework as co-teaching but uses the loss-layer gradient to select the data. The full procedure is given in Algorithm 4 in the appendix; all hyperparameters in Algorithm 4 have the same meaning as in the original work of Han et al. (2018). Compared with Algorithm 3, beyond sampling data according to the loss-layer gradient norm, Co-PRL(L) has two additional components: first, we gradually increase the fraction of data to be dropped; second, the two networks exchange their selected data to update their own parameters.
6 EXPERIMENT
In this section, we perform experiments on benchmark regression and classification datasets. The code is available in the supplementary materials of the submission. We compare PRL(G) (Algo. 2), PRL(L) (Algo. 3), and Co-PRL(L) (Algo. 4) to the following baselines. Standard: standard training without filtering data (MSE for regression, cross entropy for classification); Normclip: standard training with norm clipping; Huber: standard training with the Huber loss (regression only); Decouple: the decoupling network, which updates two networks using their disagreement (Malach & Shalev-Shwartz, 2017) (classification only); Bootstrap: uses a weighted combination of predicted and original labels as the corrected labels, then performs back-propagation (Reed et al., 2014) (classification only); Min-sgd: chooses the smallest-loss sample in the minibatch to update the model (Shah et al., 2020); SPL: self-paced learning, which drops the data with the largest losses (identical to PRL(L) in the regression setting with the MSE loss); Ignormclip: clips individual gradients and then averages them to update the model (regression only); Co-teaching: collaboratively trains a pair of SPL models that exchange selected data (Han et al., 2018) (classification only). It is hard to design experiments for agnostic corrupted supervision, and we tried our best to include different types of supervision noise. The supervision corruption settings are as follows: linadv: the corrupted supervision is generated by a random wrong linear relationship with the features (regression); signflip: the sign of the supervision is flipped (regression); uninoise: corrupted supervision is sampled from a uniform distribution (regression); mixture: a mixture of the above corruption types (regression); pairflip: shuffle the coordinates (e.g., eyes to mouth in CelebA, or cat to dog in CIFAR) (regression and classification); symmetric: randomly assign a wrong class label (classification). For classification we use classification accuracy as the evaluation metric, and R-squared is used to evaluate the regression experiments. Due to space limitations, we only show the average evaluation score on the test data over the last 10 epochs; the full training curves are attached in the appendix. All experiments are repeated 5 times for regression and 3 times for classification. The main hyperparameters are shown in the appendix.
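For illustration, the sketch below shows one plausible way to generate the signflip, uninoise, and pairflip corruptions on a regression target matrix; the exact generation procedure used for our benchmarks may differ in its details.

import numpy as np

def corrupt_targets(Y: np.ndarray, eps: float, kind: str, seed: int = 0) -> np.ndarray:
    # Corrupt an eps-fraction of the rows of Y (an n x q target matrix).
    rng = np.random.default_rng(seed)
    Y = Y.copy()
    idx = rng.choice(len(Y), size=int(eps * len(Y)), replace=False)
    if kind == "signflip":        # flip the sign of the supervision
        Y[idx] = -Y[idx]
    elif kind == "uninoise":      # replace with uniform random values
        Y[idx] = rng.uniform(Y.min(), Y.max(), size=Y[idx].shape)
    elif kind == "pairflip":      # shuffle the target coordinates
        Y[idx] = Y[idx][:, rng.permutation(Y.shape[1])]
    return Y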
6.1 REGRESSION EXPERIMENT
We use CelebA data to perform regression tasks. The CelebA dataset has 162770 training images, 19867 validation images, and 19962 test images. The target variable is the ten-dimensional vector of coordinates of the left eye, right eye, nose, left mouth corner, and right mouth corner: given a human face image, the goal is to predict the 10 face-landmark coordinates in the image. We experimented with adding different types of noise to the landmark coordinates. We preprocess CelebA as follows: we train a three-layer CNN on the 162770 training images to predict the clean coordinates (using the 19867 validation images for early stopping), and then use the trained network to extract 512-dimensional features on the test set. The final data for our experiments thus consist of features X ∈ R^{19962×512} and targets Y ∈ R^{19962×10}. We further split these data into training and test sets, with the training set containing 80% of the data. We then manually add linadv, signflip, uninoise, pairflip, and mixture supervision noise to the training targets, with corruption rates varying from 0.1 to 0.4. We use a 3-layer fully connected network in the experiments. The R-squared values averaged over the last 10 epochs are reported in Table 1.
6.2 CLASSIFICATION EXPERIMENT
We perform experiments on CIFAR10 and CIFAR100 to illustrate the effectiveness of our algorithm in the classification setting, using the same 9-layer convolutional neural network as Han et al. (2018). Since most baselines include batch normalization, which makes it difficult to compute individual gradients efficiently, we drop the ignormclip and PRL(G) baselines. In the appendix, we report results where both co-teaching and Co-PRL(L) drop the batch normalization module: co-teaching cannot maintain robustness, while our method still does; the reason is discussed in the appendix. We consider pairflip and symmetric supervision corruption in the experiments. To compare with the current state-of-the-art methods, we also use corruption rates beyond 0.5 for symmetric noise. Although our theoretical analysis assumes the noise rate is smaller than 0.5, we empirically show that when the noise is not adversarial (i.e., symmetric), our method can also handle it. Results on CIFAR10 and CIFAR100 are given in Table 2. Whether using one network (PRL(L) vs. SPL) or two networks (Co-PRL(L) vs. co-teaching), our method performs significantly better. Since in real-world problems it is hard to know the ground-truth corruption rate, we also perform a sensitivity analysis in the classification tasks to show the effect of overestimating and underestimating ε. The results are in Table 3; more discussion of the sensitivity analysis can be found in the appendix.
7 CONCLUSION
In this paper, we proposed an efficient algorithm to defend against agnostic supervision corruption. Both theoretical and empirical analyses show the effectiveness of our algorithm. Two open questions remain for future study. The first is whether the O(ε) error bound can be further improved, or shown to be tight. The second is how to exploit further properties of neural networks, such as gradient sparsity, to obtain better algorithms.
A APPENDIX
A.1 CO-IGFILTER ALGORITHM
See algorithm 4.
Algorithm 4: Co-PRL(L)
input: initialized wf and wg; learning rate η; fixed τ; epochs Tk and Tmax; iterations Nmax
return: model parameters wf and wg
for T = 1, 2, ..., Tmax do
  for N = 1, ..., Nmax do
    Randomly sample a minibatch M from Dx, D̃y (the noisy dataset)
    Get the predicted labels Ŷf and Ŷg on M from wf and wg
    Calculate the individual losses lf = L(Y, Ŷf), lg = L(Y, Ŷg)
    Calculate the loss-layer gradient norms scoref = ‖∂lf/∂ŷf‖, scoreg = ‖∂lg/∂ŷg‖
    Sample the R(T)% of instances with the smallest loss-layer gradient norms according to scoref and scoreg to obtain Nf and Ng
    Update wf = wf − η∇wf L(Ng, wf), wg = wg − η∇wg L(Nf, wg) (the two networks exchange the selected data)
  end
  Update R(T) = 1 − min{(T/Tk)τ, τ}
end
A.2 FURTHER ILLUSTRATION OF THE DIFFERENCE BETWEEN SPL AND PRL(G)
In this section, we further illustrate the difference between SPL and PRL(G). For a more intuitive understanding of our algorithm, consider Figures 1a and 1b. Since we are in the agnostic label corruption setting, it is difficult to filter out exactly the corrupted data. We show two situations, one where loss filtering fails and one where gradient filtering fails. When the loss filtering method fails, the remaining corrupted data can have a large impact on the overall loss surface, while when the gradient filtering method fails, the remaining corrupted data have only a limited impact on the overall loss surface, thus gaining robustness.
A.3 NETWORKS AND HYPERPARAMETERS
The hyperparameters are listed in Table 4. For classification, we use the same hyperparameters as Han et al. (2018). For CelebA, we use a 3-layer fully connected network with 256 hidden nodes per hidden layer and leaky-ReLU as the activation function. Our code is attached in the supplementary materials.
A.4 REGRESSION R2 ON TESTING DATA CURVE
The curves for the CelebA data are shown in Figure 2.
A.5 CLASSIFICATION CURVE
The classification curves are shown in Figure 3.
A.6 SENSITIVITY ANALYSIS
Since in real-world problems it is hard to know the ground-truth corruption rate, we perform a sensitivity analysis in the classification tasks to show the effect of ε. The results are in Table 5. The performance is stable when we overestimate the corruption rate: only when we overestimate ε can we guarantee that the gradient norm of the remaining set is small. However, when we underestimate the corruption rate, in the worst case there is no guarantee that the gradient norm of the remaining set is small. Using the empirical mean, even one large bad individual gradient can ruin the gradient estimate, and according to the convergence analysis of biased gradient descent, the final solution can then be very poor with respect to the clean data. This explains why underestimating the corruption rate gives bad results. Table 5 also shows that using the ground-truth corruption rate leads to small uncertainty.
A.7 EMPIRICAL RESULTS ON RUNNING TIME
As claimed in the paper, Algorithm 2 is not efficient. Here we report the per-epoch execution time for three methods: Standard, PRL(G), and PRL(L). For a fair comparison, we replace all batch normalization modules with group normalization, since it is hard to calculate individual gradients with batch normalization. For PRL(G), we use the opacus library (https://opacus.ai/) to calculate the individual gradients. The results are shown in Table 6.
A.8 PROOF OF CONVERGENCE OF BIASED SGD
We give the proof of the theorem on how a biased gradient affects the final convergence of SGD. We first introduce several assumptions and a definition:
Assumption 2 (L-smoothness) The function φ : R^d → R is differentiable and there exists a constant L > 0 such that for all θ1, θ2 ∈ R^d, we have φ(θ2) ≤ φ(θ1) + ⟨∇φ(θ1), θ2 − θ1⟩ + (L/2)‖θ2 − θ1‖².
Definition 1 (Biased gradient oracle) A map g : R^d × D → R^d such that g(θ, ξ) = ∇φ(θ) + b(θ, ξ) + n(θ, ξ), for a bias b : R^d × D → R^d and zero-mean noise n : R^d × D → R^d, i.e., Eξ n(θ, ξ) = 0.
Compared to standard stochastic gradient oracle, the above definition introduces the bias term b. In noisy-label settings, the b is generated by the data with corrupted labels.
Assumption 3 (σ-Bounded noise) There exists a constant σ > 0 such that Eξ‖n(θ, ξ)‖² ≤ σ², ∀θ ∈ R^d.
Assumption 4 (ζ-Bounded bias) There exists a constant ζ > 0 such that for any ξ, we have ‖b(θ, ξ)‖² ≤ ζ², ∀θ ∈ R^d.
For simplicity, assume the learning rate is a constant γ; then in every iteration, biased SGD performs the update θ_{t+1} ← θt − γ g(θt, ξ). The following theorem gives the convergence of the gradient norm under biased SGD.
Theorem 4 (Convergence of Biased SGD (formal)) Under Assumptions 2, 3, and 4, define F = φ(θ0) − φ* and step size γ = min{1/L, √(LF)/(σ√T)}, and denote the desired accuracy by k. Then T = O(1/k + σ²/k²) iterations are sufficient to obtain min_{t∈[T]} E‖∇φ(θt)‖² = O(k + ζ²).
Remark 2 Let k = ζ². Then T = O(1/ζ² + σ²/ζ⁴) iterations are sufficient to get min_{t∈[T]} E‖∇φ(θt)‖² = O(ζ²), and performing more iterations does not improve the accuracy in terms of convergence.
This is a standard result; similar results are shown in Bernstein et al. (2018); Devolder et al. (2014); Hu et al. (2020); Ajalloeian & Stich (2020). We provide the proof here for completeness. Proof: By L-smoothness, we have:
φ(θ2) ≤ φ(θ1) + ⟨∇φ(θ1), θ2 − θ1⟩ + (L/2)‖θ2 − θ1‖²

Using γ ≤ 1/L, we have

E φ(θ_{t+1}) ≤ φ(θt) − γ⟨∇φ(θt), E gt⟩ + (γ²L/2)(E‖gt − E gt‖² + ‖E gt‖²)
= φ(θt) − γ⟨∇φ(θt), ∇φ(θt) + bt⟩ + (γ²L/2)(E‖nt‖² + ‖∇φ(θt) + bt‖²)
≤ φ(θt) + (γ/2)(−2⟨∇φ(θt), ∇φ(θt) + bt⟩ + ‖∇φ(θt) + bt‖²) + (γ²L/2) E‖nt‖²
= φ(θt) + (γ/2)(−‖∇φ(θt)‖² + ‖bt‖²) + (γ²L/2) E‖nt‖²

Since ‖bt‖² ≤ ζ² and E‖nt‖² ≤ σ², plugging in the learning-rate constraint, we have

E φ(θ_{t+1}) ≤ φ(θt) − (γ/2)‖∇φ(θt)‖² + (γ/2)ζ² + (γ²L/2)σ²

that is,

E φ(θ_{t+1}) − φ(θt) ≤ −(γ/2)‖∇φ(θt)‖² + (γ/2)ζ² + (γ²L/2)σ²

Then, moving the gradient norm to the left-hand side and summing over iterations, we get

(1/(2T)) Σ_{t=0}^{T−1} E‖∇φ(θt)‖² ≤ F/(Tγ) + ζ²/2 + γLσ²/2

Taking the minimum over t and substituting the learning-rate condition directly gives the result.
A.9 PROOF OF THEOREM 2
Let G̃ denote the corrupted minibatch and G the original clean minibatch, with |G| = |G̃| = m. Let N be the set of remaining data; by our algorithm, |N| = n = (1 − ε)m. Define A as the set of individual clean gradients that are not discarded by Algorithm 1, and B as the set of individual corrupted gradients that are not discarded; by definition, N = A ∪ B. Let AD be the set of individual clean gradients that are discarded, and AR the set of individual clean gradients that are replaced by corrupted data, so that G = A ∪ AD ∪ AR. BD is the set of individual corrupted gradients discarded by our algorithm. Denote a clean gradient by gi and a corrupted gradient by g̃i; by Asm. 1, each clean gradient satisfies ‖gi‖ ≤ L. We now bound the ℓ2 error:
‖µ(G) − µ(N)‖ = ‖(1/m) Σ_{i∈G} gi − ((1/n) Σ_{i∈A} gi + (1/n) Σ_{i∈B} g̃i)‖
= ‖(1/n) Σ_{i∈G} (n/m) gi − ((1/n) Σ_{i∈A} gi + (1/n) Σ_{i∈B} g̃i)‖
= ‖(1/n) Σ_{i∈A} (n/m) gi + (1/n) Σ_{i∈AD} (n/m) gi + (1/n) Σ_{i∈AR} (n/m) gi − ((1/n) Σ_{i∈A} gi + (1/n) Σ_{i∈B} g̃i)‖
= ‖(1/n) Σ_{i∈A} ((n − m)/m) gi + (1/n) Σ_{i∈AD} (n/m) gi + (1/n) Σ_{i∈AR} (n/m) gi − (1/n) Σ_{i∈B} g̃i‖
≤ ‖(1/n) Σ_{i∈A} ((n − m)/m) gi + (1/n) Σ_{i∈AD} (n/m) gi + (1/n) Σ_{i∈AR} (n/m) gi‖ + ‖(1/n) Σ_{i∈B} g̃i‖
≤ ‖Σ_{i∈A} ((m − n)/(nm)) gi + Σ_{i∈AD} (1/m) gi + Σ_{i∈AR} (1/m) gi‖ + Σ_{i∈B} (1/n) ‖g̃i‖
≤ Σ_{i∈A} ‖((m − n)/(nm)) gi‖ + Σ_{i∈AD} ‖(1/m) gi‖ + Σ_{i∈AR} ‖(1/m) gi‖ + Σ_{i∈B} (1/n) ‖g̃i‖
By the filtering step, we can also guarantee that ‖g̃i‖ ≤ L for every retained corrupted gradient. Let |A| = x; then |B| = n − x = (1 − ε)m − x, |AR| = m − n = εm, and |AD| = m − |A| − |AR| = m − x − (m − n) = n − x = (1 − ε)m − x. Thus, we have:
‖µ(G) − µ(N)‖ ≤ x((m − n)/(nm))L + (n − x)(1/m)L + (m − n)(1/m)L + (n − x)(1/n)L
= x((m − n)/(nm) − 1/m)L + n(1/m)L + (m − n)(1/m)L + (n − x)(1/n)L
= (1/m)((2ε − 1)/(1 − ε))xL + (1 − ε)L + εL + L − (1/n)xL
= xL((2ε − 2)/n) + 2L

Since (2ε − 2)/n < 0, the bound is largest when x is as small as possible. In our setting, at most εm of the rows are corrupted, so at least x ≥ n − εm = (1 − 2ε)m clean points remain in N. Substituting x = (1 − 2ε)m, we have:

‖µ(G) − µ(N)‖ ≤ (1 − 2ε)Lm((2ε − 2)/n) + 2L
= ((1 − 2ε)(2ε − 2)/(1 − ε))L + 2L
= 4εL

Since ε < 0.5, it follows that

‖µ(G) − µ(N)‖ = O(εL)
Note that if the Lipschitz continuity assumption does not hold, then the bound on the individual gradient norms becomes dimension-dependent, yielding the O(ε√d) rate.
A.10 PROOF OF RANDOMIZED FILTERING ALGORITHM
Lemma 3 (Gradient Estimation Error for Randomized Filtering) Let G̃ ∈ R^{m×d} be a corrupted matrix generated as in Problem 2, and G ∈ R^{m×d} be the original clean gradient matrix. Suppose we arbitrarily select n = (1 − ε)m rows from G̃ to obtain the remaining set N ∈ R^{n×d}. Let µ be the empirical mean function. Assume the clean gradient before the loss layer has bounded operator norm ‖W‖op ≤ C; let the maximum clean loss-layer gradient be max_i ‖αi‖ = k and the maximum corrupted loss-layer gradient be max_i ‖δi‖ = v, and assume ε < 0.5. Then we have:

‖µ(G) − µ(N)‖ ≤ Ck(3ε − 4ε²)/(1 − ε) + Cvε/(1 − ε)
A.10.1 PROOF OF LEMMA 3
Let G̃ denote the corrupted minibatch and G the original clean minibatch, with |G| = |G̃| = m. Let N be the set of remaining data; by our algorithm, |N| = n = (1 − ε)m. Define A as the set of individual clean gradients that are not discarded by Algorithm 3, and B as the set of individual corrupted gradients that are not discarded; by definition, N = A ∪ B. Let AD be the set of individual clean gradients that are discarded, and AR the set of individual clean gradients that are replaced by corrupted data, so that G = A ∪ AD ∪ AR. BD is the set of individual corrupted gradients discarded by our algorithm. Denote a clean gradient by gi = αi Wi and a corrupted gradient by g̃i = δi Wi; by our assumption, ‖Wi‖op ≤ C.
Now, we bound the ℓ2 error:
‖µ(G) − µ(N)‖ = ‖(1/m) Σ_{i∈G} gi − ((1/n) Σ_{i∈A} gi + (1/n) Σ_{i∈B} g̃i)‖
= ‖(1/n) Σ_{i∈G} (n/m) gi − ((1/n) Σ_{i∈A} gi + (1/n) Σ_{i∈B} g̃i)‖
= ‖(1/n) Σ_{i∈A} (n/m) gi + (1/n) Σ_{i∈AD} (n/m) gi + (1/n) Σ_{i∈AR} (n/m) gi − ((1/n) Σ_{i∈A} gi + (1/n) Σ_{i∈B} g̃i)‖
= ‖(1/n) Σ_{i∈A} ((n − m)/m) gi + (1/n) Σ_{i∈AD} (n/m) gi + (1/n) Σ_{i∈AR} (n/m) gi − (1/n) Σ_{i∈B} g̃i‖
≤ ‖(1/n) Σ_{i∈A} ((n − m)/m) gi + (1/n) Σ_{i∈AD} (n/m) gi + (1/n) Σ_{i∈AR} (n/m) gi‖ + ‖(1/n) Σ_{i∈B} g̃i‖   (1)
Let |A| = x; then |B| = n − x = (1 − ε)m − x, |AR| = m − n = εm, and |AD| = m − |A| − |AR| = m − x − (m − n) = n − x = (1 − ε)m − x. Thus, we have:
‖µ(G) − µ(N)‖ ≤ ‖Σ_{i∈A} ((m − n)/(nm)) gi + Σ_{i∈AD} (1/m) gi + Σ_{i∈AR} (1/m) gi‖ + Σ_{i∈B} (1/n) ‖g̃i‖
≤ Σ_{i∈A} ‖((m − n)/(nm)) gi‖ + Σ_{i∈AD} ‖(1/m) gi‖ + Σ_{i∈AR} ‖(1/m) gi‖ + Σ_{i∈B} (1/n) ‖g̃i‖
For the individual gradients, by the label-corruption gradient definition in Problem 2 and the assumption ‖W‖op ≤ C, we have ‖gi‖ ≤ ‖αi‖‖Wi‖op ≤ C‖αi‖. Denoting max_i ‖αi‖ = k and max_i ‖δi‖ = v, we get ‖gi‖ ≤ Ck and ‖g̃i‖ ≤ Cv.
‖µ(G) − µ(N)‖ ≤ Cx((m − n)/(nm))k + C(n − x)(1/m)k + C(m − n)(1/m)k + C(n − x)(1/n)v
Note that the above upper bound holds for any feasible x; we therefore take the worst case over x. Rearranging the terms, we have
‖µ(G) − µ(N)‖ ≤ Cx((m − n)/(nm) − 1/m)k + Cn(1/m)k + C(m − n)(1/m)k + C(n − x)(1/n)v
= C(1/m)((2ε − 1)/(1 − ε))xk + Ck + Cv − (1/n)Cxv
= Cx(k(2ε − 1)/(m(1 − ε)) − v/n) + Ck + Cv
= Cx((k(2ε − 1) − v)/(m(1 − ε))) + Ck + Cv

Since ε < 0.5, we have (k(2ε − 1) − v)/(m(1 − ε)) < 0, so the bound is largest when x is as small as possible. By our algorithm, n − εm = m(1 − ε) − εm = (1 − 2ε)m ≤ x ≤ n = (1 − ε)m. Substituting x = (1 − 2ε)m, we have
‖µ(G) − µ(N)‖ ≤ Ck(1 − 2ε)(2ε − 1)/(1 − ε) + Ck + Cv − Cv(1 − 2ε)/(1 − ε)
= Ck(3ε − 4ε²)/(1 − ε) + Cvε/(1 − ε)
A.11 PROOF OF THEOREM 3
By Algorithm 3, we can guarantee that v ≤ k; then we have:

‖µ(G) − µ(N)‖ ≤ Ck(3ε − 4ε²)/(1 − ε) + Cvε/(1 − ε)
≤ Ck(4ε − 4ε²)/(1 − ε) = 4εCk = O(ε√q)

since C is a constant and k is the norm of a q-dimensional vector, so k = O(√q).
A.12 COMPARISON BETWEEN SORTING THE LOSS-LAYER GRADIENT NORM AND SORTING THE LOSS VALUE
Assume we have a d-class label y ∈ R^d, where yk = 1 and yi = 0 for i ≠ k. With a slight abuse of notation, suppose we have two predictions p ∈ R^d and q ∈ R^d. Without loss of generality, assume p has the smaller cross-entropy loss, which implies pk ≥ qk. For MSE, assume we have the opposite result:

‖p − y‖² ≥ ‖q − y‖² ⟹ Σ_{i≠k} pi² + (1 − pk)² ≥ Σ_{i≠k} qi² + (1 − qk)²   (2)

For the pi with i ≠ k, since Σ_{i≠k} pi = 1 − pk, we have

Var_{i≠k}(pi) = E(pi²) − E(pi)² = (1/(d − 1)) Σ_{i≠k} pi² − (1/(d − 1)²)(1 − pk)²   (3)

Then

Σ_{i≠k} pi² + (1 − pk)² ≥ Σ_{i≠k} qi² + (1 − qk)²
⟹ Var_{i≠k}(pi) + (d/(d − 1)²)(1 − pk)² ≥ Var_{i≠k}(qi) + (d/(d − 1)²)(1 − qk)²
⟹ Var_{i≠k}(pi) − Var_{i≠k}(qi) ≥ (d/(d − 1)²)((1 − qk)² − (1 − pk)²)
⟹ Var_{i≠k}(pi) − Var_{i≠k}(qi) ≥ (d/(d − 1)²)((pk − qk)(2 − pk − qk))   (4)
| 1. What is the main contribution of the paper in noisy label learning?
2. What are the strengths of the proposed algorithm, particularly in its ability to resist label noise?
3. What are the weaknesses of the paper regarding its assumptions, proofs, and symbol definitions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
pros
The authors provide an insight that in noisy label learning, if the corrupted gradient is not far from the true one, then the learning algorithm could converge to a sub-optimal result.
Detailed experiments provide empirical evidence for the proposed algorithm under different kinds of label noise.
cons
The authors propose a method that keeps only the data with small gradient norm during the training process to resist label noise. However, they do not verify that this design is motivated by their theoretical results. Some important proofs for the key results are missing, e.g., for Theorem 2 and Theorem 3, making the paper not self-contained.
Many symbols are not well-defined mathematically.
This paper proposes a robust algorithm for noisy label learning. By keeping the data with a small gradient norm in the training process, the proposed algorithm can resist label noise. Instead of making assumptions on the label corruption, the authors assume that the difference between the clean minibatch gradient and the corrupted minibatch gradient is bounded. Thus the proposed method can converge to an ε-optimal result. By dropping the data with a large gradient norm, the estimated gradient mean will not be far from the true one. The theoretical results make sense, but detailed proofs are lacking, which makes the paper not self-contained, e.g., for Theorem 2 and Theorem 3. The empirical studies on several datasets show the robustness of the proposed algorithm under different kinds of label noise.
Firstly, the adversary can choose up to fraction of the clean target Dy ∈ Rn×q and change the selected row of Dy to arbitrary valid numbers, generating D y ∈ Rn×q . Then, the adversary returns the corrupted dataset Dx, D y to our learning algorithmA. In this process, the only constraint on the adversary is the fraction, and the adversary has full knowledge of the data, and even the learning algorithm A. A natural question to ask is: Given a data set with -fraction corrupted supervision Dx ∈ Rn×p, D y , and a learning objective φ : Rp × Rq × Rd → R parameterized by θ, can we output parameters θ ∈ Rd such that ‖∇θφ(θ;Dx,Dy)‖ is minimized. When = 0, we have D y = Dy and learning is done on the clean data. The stochastic gradient descent could converge to a stationary point, where ‖∇θφ(θ;Dx,Dy)‖ = 0. However, when the supervision is corrupted as above, this is not the case any more, due to the error in θ impacted by the corrupted data. We thus want an efficient algorithm to find a model θ that minimizes ‖∇θφ(θ;Dx,Dy)‖. A robust model θ should have a small value of ‖∇θφ(θ;Dx,Dy)‖, and we hypothesize that a smaller ‖∇θφ(θ;Dx,Dy)‖ has better generalization.
3.1 STOCHASTIC GRADIENT DESCENT WITH BIASED GRADIENT
A direct consequence of corrupted supervision is biased gradient estimation. In this section, we will first analyze how such biased gradient estimation affects the robustness of learning. The classical analysis of stochastic gradient descent (SGD) requires access to the stochastic gradient oracle, which is an unbiased estimation of the true gradient. However, corrupted supervision leads to corrupted gradients, and it is thus difficult to get unbiased gradient estimation without assumptions of how the gradients are corrupted. We start the analysis by the following informal theorem (without elaborated discussions of assumptions) of how biased gradient affects the final convergence of SGD. Its formal version is provided in Theorem 4, Appendix.
Theorem 1 (Convergence of Biased SGD (Informal)) Under mild assumptions, denote ζ to be the maximum `2 norm of the difference between clean minibatch gradient and corrupted minibatch gradient ‖g− g̃‖ ≤ ζ, then by using biased gradient estimation, SGD converges to the ζ-approximated stationary points: E‖∇φ(θt)‖2 = O(ζ2). Remark 1 In the corrupted supervision setting, let the gradient estimated by corrupted data D be ĝ, the gradient estimated by clean data D be g. Assume ‖g̃ − g‖ ≤ ζ, it follows that when using corrupted dataset in SGD, it converges to the ζ-approximated stationary point of the objective defined by the clean data. Note the difference between above theorem and typical convergence theorem is that we are using a biased gradient estimation.
According to Theorem 1 and the remark, a robust estimation of the gradient g is the key to ensure a robust model (converge to the clean solution). We also assume the loss function has the form of L(y, ŷ), where many commonly used loss functions fall in this category.
3.2 ROBUST GRADIENT ESTIMATION FOR GENERAL DATA CORRUPTION
We first introduce Algo. 2 for general corruption (i.e. corruption on both features and/or supervisions). The algorithm excludes the data points with large gradient norms, and uses the empirical mean of the remaining to update gradients. In Thm. 2 we give its robustness property.
Algorithm 1: Robust Mean Estimation for Corrupted Gradient input: gradient matrix G ∈ m× d, corruption rate return estimated mean µ̂ ∈ Rd ; 1. For each row zi in G, calculate the l2 norm ‖zi‖ 2. Choose the -fraction rows with large ‖zi‖ 3. Remove those selected rows, and return the empirical mean of the rest points as µ̂.
Assumption 1 (Individual L-smooth loss) For every individual loss function φi, there exists constant L > 0, such that for a clean sample i, we have |φi(x) − φi(y)| ≤ L|x − y| for any x,y.
Theorem 2 (Robust Gradient Estimation For Data Corruption) Let G̃ ∈ Rm×d be a corrupted gradient matrix, and G ∈ Rm×d be the clean gradient matrix. Let µ be the empirical mean function,
Algorithm 2: (PRL(G)) Provable Robust Learning for General Corrupted Data input: Label corrupted dataset Dx,D y , learning rate γt; return model parameter θ; for t = 1 to maxiter do
Randomly sample a minibatch M from Dx,D y Calculate the individual gradient G̃ for M Apply Algorithm 1 on G̃ to get robust gradient estimation µ̂ Update model θt+1 = θt − γtµ̂
end we have that the output of Algo. 1 µ̂ of G̃ satisfies ‖µ(G) − µ̂‖ = O( √ d). Moreover, if Asm. 1 holds, we further have ‖µ(G)− µ̂‖ = O( L).
Combining with the aforementioned convergence analysis of biased SGD, we get the following:
Corollary 1 (Robust Optimization For Corrupted Data) Given assumptions used in Thm. 1, and Asm. 1, applying Algo. 1 to any -fraction corrupted data, we get mint∈[T ] E‖∇φ(xt)‖ = O( L) with large enough T . If Asm. 1 does not hold, then we get mint∈[T ] E‖∇φ(xt)‖ = O( √ d) with large enough T .
The robustness guarantee states that even training on generally corrupted data (corrupted supervision is a special case), Algo. 2 guarantee that the gradient norm on remaining data cannot be too large. Since Thm. 2 gives a dimension-free error bound when Asm. 1 holds, Corollary 1 also gives the dimension-free robustness guarantee with Asm. 1. We defer the detailed discussion ofO( L) to later sections. Although the error bound O( L) sounds good, we note that it still has several drawbacks: First, the dimension-free error bound means the error does not grow with increasing dimensions, and is critical when working with neural networks, due to the extremely large gradient dimension (i.e., #parameters of neural network). Thm. 2 gives the dimension-free error bound only when Asm. 1 holds, which is quite strong. In addition, even when Asm. 1 holds, L can be large, leading to a large gradient estimation error. Existing work (Diakonikolas et al., 2019) already acheives the dimensionfreeO( √ ) guarantee with general corruptions, which is a much more better theoretical results than above theorem. However, in practice, we found that the gradient norms of deep neural networks for individual data points are usually not very large, even at the beginning of the training. This can be partially due to the network structure. Further discussion on this issue is beyond the scope of this paper, but the theoretical bound above states that the robustness should depend on the number of parameters for the general models.
Another concern of Alg. 2 is the efficiency. It requires computing individual gradients. Although there are some advanced approaches to get the individual gradient, e.g., (Goodfellow, 2015), it is still relatively slow as compared to commonly used back-propagation. Moreover, these methods are usually not compatible with popular components such as batch normalization (BN) since the individual gradients are not independent inside BN, using of which will lose the benefits from parallelization.
3.3 ROBUST GRADIENT ESTIMATION FOR ONE DIMENSIONAL CORRUPTED SUPERVISION
In this section, we show that the above robustness bound can be improved if we assume the corruption only comes from supervision. Also, by fully exploiting the gradient structure of the corrupted supervision, our algorithm can be much more efficient and meanwhile compatible with batch normalization. We use the one dimensional supervision setting (binary classification or single-target regression) to illustrate this intuition and extend it more general settings in the next section. Consider a high-dimensional supervised learning problem with X ∈ Rn×p and y ∈ Rn. The goal is to learn a function f parameterized by θ ∈ Rd minimizing the following loss minθ ∑n i=1 φi =
minθ ∑n i=1 L(yi, f(xi, θ)). The gradient for a data point i is ∇θφi = ∂li ∂fi ∂fi ∂θ = αigi.
One key observation is that: when only supervision is corrupted, then the corruption contributes only to the term αi = ∂li∂fi , which is a scalar in the one-dimensional setting. In other words, given the clean gradient of ith point, gi ∈ Rd, the corrupted supervision can only perturbs the the length of the gradient vector, changing the gradient from αigi to δigi, where δi =
∂l i ∂fi . When αi and δi are
known, then we can easily eliminate the impact from corrupted supervision. But this is not the case since we have have only the possibly corrupted target ŷi as opposed to the ground truth yi.
On the other hand, the fact that corrupted supervision scales the clean gradient can be used to reshape the robust optimization problem. Recall that in every iteration, we update our model by θ+ = θ− γµ(G), where µ denotes the empirical mean function and G = [∇θφT1 , . . . ,∇θφTm] ∈ Rm×d is the gradient matrix with mini-batch size m. We then have the following:
Problem 1 (Robust Gradient Estimation for Corrupted Supervision - One Dimensional Case) Given a clean gradient matrix G ∈ Rm×d, an -corrupted matrix G̃ with at most -fraction rows are corrupted from αigi to δigi, design an algorithmA : Rm×d → Rd that minimizes ‖µ(G)−A(G̃)‖.
Note that when ‖δi‖ is large, the corrupted gradient will have large effect on the empirical mean, and vice versa. This motivates us to develop an algorithm that filters out data points by the loss layer gradient ‖ ∂li∂fi ‖. If the norm of the loss layer gradient of a data point is large (in one-dimensional case, this gradient reduces to a scalar and the norm becomes its absolute value), we exclude the data point when computing the empirical mean of gradients for this iteration. Note that this algorithm is applicable to both regression and classification problems. Especially, when using the mean squared error (MSE) loss for regression, its gradient norm is exactly the loss itself, and the algorithm reduces to self-paced learning Kumar et al. (2010). We summarize the procedure in Alg. 3 and extend it to the more general multi-dimension case in the next section.
Algorithm 3: (PRL(L)) Efficient Provable Robust Learning for Corrupted Supervision input: dataset Dx,D y with corrupted supervision, learning rate γt; return model parameter θ; for t = 1 to maxiter do
Randomly sample a minibatch M from Dx,D y Compute the predicted label Ŷ from M Calculate the gradient norm for the loss layer, (i.e. ‖ŷ − y‖ for mean square error or cross entropy)
for each data point in M Remove the top τ -fraction data from M according to ‖ŷ − y‖ Return the empirical mean of the remaining M as the robust mean estimation µ̂ Update model θt+1 = θt − γtµ̂
end
3.4 EXTENSION TO MULTI-DIMENSIONAL CORRUPTED SUPERVISION
To extend our algorithm and analysis to multi-dimensional case, let q to be the supervision dimension, the gradient for each data point is ∇θφi = ∂li∂fi ∂fi ∂θ , where ∂li ∂fi ∈ Rq is the gradient of loss respect to model outputs, and ∂fi∂θ ∈ R q×d is the gradient of model outputs respect to model parameters. Similarly, when the supervision is corrupted, the corruption comes from the term ∂li∂fi , which is a vector. Let δi = ∂l i ∂fi ∈ Rq , αi = ∂li∂fi ∈ R q , Wi = ∂fi∂θ ∈ R q×d, m be the minibatch size. Denote the clean gradient matrix G ∈ Rm×d, where the ith row of gradient matrix gi = αiWi. Now the multi-dimensional robust gradient estimation problem is defined by:
Problem 2 (Robust Gradient Estimation for Corrupted Supervision - Multi-Dimensional Case) Given a clean gradient matrix $G$ and an $\epsilon$-corrupted matrix $\tilde{G}$ in which at most an $\epsilon$-fraction of rows are corrupted from $\alpha_i W_i$ to $\delta_i W_i$, design an algorithm $\mathcal{A}: \mathbb{R}^{m\times d} \to \mathbb{R}^d$ that minimizes $\|\mu(G) - \mathcal{A}(\tilde{G})\|$.
We start our analysis by investigating the effect of a filtering-based algorithm, i.e., using the empirical mean of a $(1-\epsilon)$-fraction subset of the gradients to estimate the empirical mean of the clean gradient matrix. We have the following result for a randomized filtering-based algorithm (proof in Appendix):
Lemma 1 (Gradient Estimation Error for Randomly Dropping an $\epsilon$-fraction of Data) Let $\tilde{G} \in \mathbb{R}^{m\times d}$ be a corrupted matrix generated as in Problem 2, and let $G \in \mathbb{R}^{m\times d}$ be the original clean gradient matrix. Suppose an arbitrary $(1-\epsilon)$-fraction of rows are selected from $\tilde{G}$ to form the matrix $N \in \mathbb{R}^{n\times d}$. Let $\mu$ be the empirical mean function. Assume the clean gradient before the loss layer has bounded operator norm, i.e., $\|W\|_{op} \le C$, the maximum clean loss-layer gradient is $\max_{i\in G}\|\alpha_i\| = k$, and the maximum corrupted loss-layer gradient is $\max_{i\in N}\|\delta_i\| = v$. Then we have:

$$\|\mu(G) - \mu(N)\| \le Ck\,\frac{3\epsilon - 4\epsilon^2}{1-\epsilon} + Cv\,\frac{\epsilon}{1-\epsilon}.$$
We see that $v$ is the only term related to the corrupted supervision. If $v$ is large, the bound is not safe, since the right-hand side can be arbitrarily large (i.e., an adversary can change the labels such that $v$ is extremely large). Thus, controlling the magnitude of $v$ provides a way to effectively reduce the bound: for example, if we manage to ensure $v \le k$, the bound becomes safe. This can be achieved by sorting the loss-layer gradient norms and discarding the largest $\epsilon$-fraction of data points. We thus have the following result.
Theorem 3 (Robust Gradient Estimation for Supervision Corruption) Let $\tilde{G}$ be a corrupted matrix generated as in Problem 2, $q$ the label dimension, and $\mu$ the empirical mean of the clean matrix $G$. Assume the maximum clean gradient before the loss layer has bounded operator norm $\|W\|_{op} \le C$. Then the gradient estimate $\hat{\mu}$ in Alg. 3 satisfies $\|\mu - \hat{\mu}\| = O(\epsilon\sqrt{q}) \approx O(\epsilon)$.
Comparing Thm. 2 and Thm. 3, we see that when the corruption comes only from the supervision, the dependence on $d$ is reduced to a dependence on $q$, where in most deep learning settings $q \ll d$. Applying Thm. 1 directly then shows that our algorithm is also robust in multi-label settings.
3.5 COMPARISON WITH DIAKONIKOLAS ET AL. (2019) AND OTHER METHODS
SEVER (Diakonikolas et al., 2019) showed promising state-of-the-art theoretical results for general corruptions, achieving an $O(\sqrt{\epsilon})$ dimension-free guarantee. Compared to Diakonikolas et al. (2019), we make two contributions: (a) by assuming the corruption comes from the labels (we admit this is quite strong compared to the general corruption setting), we obtain a better error rate; (b) our algorithm scales to deep neural networks while Diakonikolas et al. (2019) does not. We consider this a contribution given that DNN-based models are currently the state-of-the-art methods for noisy-label learning problems (at least in empirical performance).
Although Diakonikolas et al. (2019) achieves very nice theoretical results, it unfortunately cannot be applied to DNNs with current hardware. Diakonikolas et al. (2019) builds its learning algorithm on dimension-free robust mean estimation breakthroughs, and we note that most robust mean estimators filter out data by computing the score of the projection onto the maximum singular vector. For example, Diakonikolas et al. (2019) requires performing SVD on an $n \times d$ individual-gradient matrix, where $n$ is the sample size and $d$ is the number of parameters. This works well for small datasets and small models, since both $n$ and $d$ fit within current memory limits. For deep neural networks, however, this matrix is far beyond current GPU memory capacity, which may explain why Diakonikolas et al. (2019) only reports ridge-regression and SVM results on small data (we are not saying that they should provide DNN results). In our experiments, $n$ is 60000 and $d$ is on the order of millions (network parameters); it is impractical to store 60000 copies of the network's gradients on a single GPU card. In contrast, our algorithm does not need to store the full gradient matrix: by considering only the loss-layer gradient norm, we can easily extend it to DNNs, and we show that this simple strategy works well both in theory and on challenging empirical tasks.
We note that there are linear (Bhatia et al., 2015; 2017) and convex (Prasad et al., 2018) methods that achieve better robustness guarantees. However, most of them cannot be directly applied to deep neural networks.
4 RELATIONSHIP TO SELF-PACED LEARNING (SPL)
SPL looks very similar to our method at first glance. Instead of keeping data points with small gradient norms, SPL keeps data points with small losses. The gradient norm and the loss function can be tied together by the well-known Polyak-Łojasiewicz (PL) condition, which assumes there exists a constant $s > 0$ such that $\frac{1}{2}\|\nabla\phi(x)\|^2 \ge s\,(\phi(x) - \phi^*)$ for all $x$. When the neural network is highly over-parameterized, $\phi^*$ can be assumed to be equal across different samples, since neural networks can achieve zero training loss (Zhang et al., 2016). By sorting the error $\phi(x_i)$ for every data point, SPL is thus sorting a lower bound of the gradient norm when the PL condition holds. However, the ranking by gradient norm and the ranking by loss can be very different, since there is no guarantee that the gradient norm is monotonically increasing with the loss value. We provide an illustration of why SPL is not robust from a geometric perspective in the appendix. Here we show that even for the simple squared loss, the monotonic relationship is easy to break: consider $\phi(x_1, x_2) = 0.5x_1^2 + 50x_2^2$ and the two points $(1000, 1)$ and $(495, -49.5)$; the monotonic relationship does not hold for this pair. Nocedal et al. (2002) showed that the monotonic relationship holds for the squared loss $\phi(x) = \frac{1}{2}(x - x^*)^\top Q (x - x^*)$ only if the condition number of $Q$ is smaller than $3 + 2\sqrt{2}$, which is a rather strong assumption, especially when $x$ is high-dimensional. For more general loss functions (e.g., neural networks), the required assumptions on the condition number can only be stronger, thus breaking the monotonic relationship. Therefore, although SPL sorts a lower bound of the gradient norm under mild assumptions, our algorithm is significantly different from SPL and its variations.
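To make the counter-example concrete, here is a minimal numerical check (a NumPy sketch; the function and the two points are exactly those given above):

```python
import numpy as np

# Check phi(x1, x2) = 0.5*x1**2 + 50*x2**2: the point with the larger
# loss has the *smaller* gradient norm, so loss ranking != gradient ranking.
def phi(x):
    return 0.5 * x[0] ** 2 + 50 * x[1] ** 2

def grad_norm(x):
    return np.linalg.norm([x[0], 100 * x[1]])   # gradient is (x1, 100*x2)

a, b = np.array([1000.0, 1.0]), np.array([495.0, -49.5])
print(phi(a), grad_norm(a))   # 500050.0, ~1005  (large loss, small gradient)
print(phi(b), grad_norm(b))   # 245025.0, ~4975  (small loss, large gradient)
```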
We now discuss the relationship between SPL and Algorithm 3 under supervision corruption. SPL has the same form as Algorithm 3 when using the mean squared error for regression tasks, since there the loss-layer gradient norm equals the loss itself. In classification, however, Algorithm 3 differs from SPL. To better understand the algorithm, we further analyze the difference between SPL and our algorithm for the cross-entropy loss.
For cross-entropy, denote the output logits by $o$; we have $H(y_i, f_i) = -\langle y_i, \log(\mathrm{softmax}(o_i))\rangle = -\langle y_i, \log(f_i)\rangle$. The gradient of the cross-entropy w.r.t. $o_i$ is $\frac{\partial H_i}{\partial o_i} = \mathrm{softmax}(o_i) - y_i = f_i - y_i$. Thus, the loss-layer gradient norm is the Euclidean distance between $y_i$ and $f_i$, i.e., the root of the squared error. Next, we investigate when MSE and cross-entropy give a non-monotonic relationship. For simplicity, we only study a sufficient condition for the non-monotonic relationship, shown in Lemma 2.
Lemma 2 Let $y \in \mathbb{R}^q$ with $y_k = 1$ and $y_i = 0$ for $i \neq k$, and let $\alpha, \beta$ be two $q$-dimensional vectors in the probability simplex. Without loss of generality, suppose $\alpha$ has the smaller cross-entropy loss, i.e., $\alpha_k \ge \beta_k$. Then a sufficient condition for $\|\alpha - y\| \ge \|\beta - y\|$ is

$$\mathrm{Var}_{i\neq k}(\{\alpha_i\}) - \mathrm{Var}_{i\neq k}(\{\beta_i\}) \ge \frac{q}{(q-1)^2}\,\big((\alpha_k - \beta_k)(2 - \alpha_k - \beta_k)\big).$$
As $\alpha_k \ge \beta_k$, the right-hand side is non-negative. In conclusion, when MSE ranks the two predictions differently from cross-entropy, the variance of the non-true-class probabilities of the discarded data point is larger. For example, suppose the ground-truth vector is y = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0] and we have two predictions α = [0.08, 0.28, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08] and β = [0.1, 0.3, 0.34, 0.05, 0.05, 0.1, 0.03, 0.03, 0, 0]. The prediction α has the smaller MSE loss while β has the smaller cross-entropy loss. Intuitively, β is more likely to correspond to noisy data, since its prediction has two peaks (0.3 and 0.34); however, since the cross-entropy loss only considers one coordinate, it cannot detect this situation. Compared to cross-entropy, the loss-layer gradient (the MSE) considers all dimensions and thus takes the overall prediction distribution into account.
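As a sanity check of this example, here is a minimal NumPy sketch using the α and β above; the two rankings indeed disagree, so loss-based (SPL) and gradient-norm-based filtering would discard different points here:

```python
import numpy as np

# Verify: alpha has the smaller MSE, beta the smaller cross-entropy.
y = np.zeros(10); y[1] = 1.0
alpha = np.array([0.08, 0.28] + [0.08] * 8)
beta = np.array([0.1, 0.3, 0.34, 0.05, 0.05, 0.1, 0.03, 0.03, 0.0, 0.0])

for name, p in [("alpha", alpha), ("beta", beta)]:
    ce = -np.log(p[1])                 # cross-entropy against the one-hot label
    grad = np.linalg.norm(p - y)       # loss-layer gradient norm for CE
    print(name, round(ce, 4), round(grad, 4))
# alpha: ce ~ 1.2730, ||p - y|| ~ 0.7589
# beta : ce ~ 1.2040, ||p - y|| ~ 0.7952
```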
5 COMBINING WITH CO-TEACHING STYLE TRAINING
Motivated by co-teaching (Han et al., 2018), one of the current state-of-the-art deep methods for learning under noisy labels, we propose Co-PRL(L), which has the same framework as co-teaching but uses the loss-layer gradient to select the data. The full procedure is given in Algorithm 4 in the appendix; all hyper-parameters in Algorithm 4 have the same meaning as in the original Han et al. (2018). Compared with Algorithm 3, besides selecting data according to the loss-layer gradient norm, Co-PRL(L) has two other components: first, we gradually increase the fraction of data to be dropped; second, the two networks exchange their selected data to update their own parameters.
6 EXPERIMENT
In this section, we perform experiments on benchmark regression and classification datasets. The code is available in the supplementary materials. We compare PRL(G) (Alg. 2), PRL(L) (Alg. 3), and Co-PRL(L) (Alg. 4) with the following baselines. Standard: standard training without filtering data (MSE for regression, cross-entropy for classification); Normclip: standard training with norm clipping; Huber: standard training with the Huber loss (regression only); Decouple: decoupling networks, updating two networks by using their disagreement (Malach & Shalev-Shwartz, 2017) (classification only); Bootstrap: using a weighted combination of predicted and original labels as the training targets for back-propagation (Reed et al., 2014) (classification only); Min-sgd: choosing the smallest-loss sample in the minibatch to update the model (Shah et al., 2020); SPL: self-paced learning, dropping the data with the largest losses (identical to PRL(L) in the regression setting with the MSE loss); Ignormclip: clipping individual gradients before averaging them to update the model (regression only); Co-teaching: collaboratively training a pair of SPL models that exchange selected data (Han et al., 2018) (classification only). Since it is hard to enumerate all agnostic supervision corruptions, we tried our best to include diverse types of supervision noise: linadv: corrupted supervision generated by a random wrong linear relationship with the features (regression); signflip: the sign of the supervision is flipped (regression); uninoise: random samples from a uniform distribution used as corrupted supervision (regression); mixture: a mixture of the above corruption types (regression); pairflip: shuffling the coordinates (i.e., eyes to mouth in CelebA, or cat to dog in CIFAR) (regression and classification); symmetric: randomly assigning a wrong class label (classification). For classification, we use classification accuracy as the evaluation metric; R-square is used to evaluate the regression experiments. Due to space limits, we only show the average evaluation score on the test data over the last 10 epochs; the full training curves are attached in the appendix. All experiments are repeated 5 times for regression and 3 times for classification. The main hyperparameters are shown in the appendix.
6.1 REGRESSION EXPERIMENT
We use the CelebA data to perform regression tasks. The CelebA dataset has 162,770 training images, 19,867 validation images, and 19,962 test images. The target variable is the ten-dimensional coordinates of the left eye, right eye, nose, left mouth corner, and right mouth corner: given a human face image, the goal is to predict the 10 facial-landmark coordinates in the image. We preprocess CelebA as follows: we train a three-layer CNN on the 162,770 training images to predict the clean coordinates (using the 19,867 validation images for early stopping), and then use the trained network to extract 512-dimensional features on the test set. The final data for our experiments thus have features $X \in \mathbb{R}^{19962\times 512}$ and targets $Y \in \mathbb{R}^{19962\times 10}$. We further split these data into training and test sets, with the training set containing 80% of the data. We then manually add the linadv, signflip, uninoise, pairflip, and mixture types of supervision noise to the target variable of the training data, varying the corruption rate from 0.1 to 0.4 for all corruption types. We use 3-layer fully connected networks in the experiments. The average R-square over the last 10 epochs is reported in Table 1.
6.2 CLASSIFICATION EXPERIMENT
We perform experiments on CIFAR10 and CIFAR100 to illustrate the effectiveness of our algorithm in the classification setting, using the same 9-layer convolutional neural network as Han et al. (2018). Since most baselines include batch normalization, which makes it difficult to compute individual gradients efficiently, we drop the ignormclip and PRL(G) baselines. In the appendix, we report results where both co-teaching and Co-PRL(L) drop the batch normalization module: co-teaching cannot maintain robustness while our method does, for reasons discussed in the appendix. We consider pairflip and symmetric supervision corruptions in the experiments. To compare with the current state-of-the-art methods, for symmetric noise we also use corruption rates beyond 0.5. Although our theoretical analysis assumes the noise rate is smaller than 0.5, we empirically show that our method can also handle such noise when the corruption is not adversarial (i.e., symmetric). Results on CIFAR10 and CIFAR100 are in Table 2. Whether using one network (PRL(L) vs. SPL) or two networks (Co-PRL(L) vs. Co-teaching), our method performs significantly better. Since the ground-truth corruption rate is rarely known in real-world problems, we also perform a sensitivity analysis on the classification tasks to show the effect of overestimating and underestimating $\epsilon$. The results are in Table 3; more discussion of the sensitivity analysis can be found in the appendix.
7 CONCLUSION
In this paper, we proposed an efficient algorithm to defend against agnostic supervision corruptions. Both theoretical and empirical analyses show the effectiveness of our algorithm. Two open questions deserve future study: the first is whether we can further improve the $O(\epsilon)$ error bound or show that $O(\epsilon)$ is tight; the second is to utilize more properties of neural networks, such as gradient sparsity, to see whether better algorithms are possible.
A APPENDIX
A.1 CO-IGFILTER ALGORITHM
See algorithm 4.
Algorithm 4: Co-PRL(L)
input: initialized $w_f$ and $w_g$, learning rate $\eta$, fixed $\tau$, epochs $T_k$ and $T_{max}$, iterations $N_{max}$; return: model parameters $w_f$ and $w_g$;
for T = 1, 2, ..., $T_{max}$ do
    for N = 1, ..., $N_{max}$ do
        Randomly sample a minibatch $M$ from $\mathcal{D}_x, \mathcal{D}_{\hat{y}}$ (noisy dataset)
        Get the predicted labels $\hat{Y}_f$ and $\hat{Y}_g$ from $M$ using $w_f$ and $w_g$
        Calculate the individual losses $l_f = L(Y, \hat{Y}_f)$, $l_g = L(Y, \hat{Y}_g)$
        Calculate the loss-layer gradient norms $score_f = \|\frac{\partial l_f}{\partial \hat{y}_f}\|$, $score_g = \|\frac{\partial l_g}{\partial \hat{y}_g}\|$
        Sample the $R(T)\%$ instances with the smallest loss-layer gradient norms by $score_f$ and $score_g$ to get $N_f$, $N_g$
        Update $w_f = w_f - \eta\nabla_{w_f}L(N_f, w_f)$, $w_g = w_g - \eta\nabla_{w_g}L(N_g, w_g)$ (on the exchanged selected data)
    end
    Update $R(T) = 1 - \min\{\frac{T}{T_k}\tau, \tau\}$
end
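Below is a minimal PyTorch sketch of one Co-PRL(L) inner step. The function and variable names are ours, and the data-exchange direction follows the co-teaching convention described in the text (each network trains on its peer's selection):

```python
import torch
import torch.nn.functional as F

def co_prl_l_step(f, g, opt_f, opt_g, xb, yb, keep_ratio):
    """Sketch of one Co-PRL(L) inner step: each network scores the batch by
    its loss-layer gradient norm, keeps the R(T)% smallest, and the peer
    network is trained on that selection (co-teaching style exchange)."""
    def select(net):
        probs = torch.softmax(net(xb), dim=1)
        onehot = F.one_hot(yb, probs.shape[1]).float()
        score = (probs - onehot).norm(dim=1)       # ||y_hat - y|| per sample
        k = int(keep_ratio * xb.shape[0])
        return score.argsort()[:k]
    idx_f, idx_g = select(f), select(g)
    # Exchange: f learns from g's selection and vice versa.
    for net, opt, idx in [(f, opt_f, idx_g), (g, opt_g, idx_f)]:
        loss = F.cross_entropy(net(xb[idx]), yb[idx])
        opt.zero_grad(); loss.backward(); opt.step()
```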
A.2 FURTHER ILLUSTRATION OF THE DIFFERENCE BETWEEN SPL AND PRL(G)
In this section, we further illustrate the difference between SPL and PRL(G). For a more intuitive understanding of our algorithm, see Figures 1a and 1b. Since we are in the agnostic label-corruption setting, it is difficult to filter out exactly the corrupted data. We show two situations: one where loss filtering fails and one where gradient filtering fails. When the loss-filtering method fails, the remaining corrupted data can have a large impact on the overall loss surface, while when the gradient-filtering method fails, the remaining corrupted data have only a limited impact on the overall loss surface, thus gaining robustness.
A.3 NETWORKS AND HYPERPARAMETERS
The hyperparameters are in Table 4. For classification, we use the same hyperparameters as Han et al. (2018). For CelebA, we use a 3-layer fully connected network with 256 hidden nodes per hidden layer and leaky-ReLU as the activation function. Our code is attached in the supplementary materials.
A.4 REGRESSION R2 ON TESTING DATA CURVE
The curves for the CelebA data are shown in Figure 2.
A.5 CLASSIFICATION CURVE
The classification curves are shown in Figure 3.
A.6 SENSITIVITY ANALYSIS
Since the ground-truth corruption rate is rarely known in real-world problems, we perform a sensitivity analysis on the classification tasks to show the effect of $\epsilon$. The results are in Table 5. As we can see, the performance is stable if we overestimate the corruption rate: only when we overestimate $\epsilon$ can we guarantee that the gradient norms of the remaining set are small. However, when we underestimate the corruption rate, in the worst case there is no such guarantee for the remaining set. With the empirical mean, even one large bad individual gradient can ruin the gradient estimate, and according to the convergence analysis of biased gradient descent, the final solution can then be very bad on clean data. This explains why underestimating the corruption rate gives poor results. Also, from Table 5 we see that using the ground-truth corruption rate leads to low variance.
A.7 EMPIRICAL RESULTS ON RUNNING TIME
As claimed in the paper, Algorithm 2 is not efficient. Here we report the per-epoch execution time of three methods: Standard, PRL(G), and PRL(L). For a fair comparison, we replace all batch normalization modules with group normalization, since it is hard to calculate individual gradients when using batch normalization. For PRL(G), we use the Opacus library (https://opacus.ai/) to calculate the individual gradients. The results are shown in Table 6.
A.8 PROOF OF CONVERGENCE OF BIASED SGD
We give the proof of the theorem showing how a biased gradient affects the convergence of SGD. We first introduce several assumptions and a definition:
Assumption 2 (L-smoothness) The function $\phi: \mathbb{R}^d \to \mathbb{R}$ is differentiable and there exists a constant $L > 0$ such that for all $\theta_1, \theta_2 \in \mathbb{R}^d$, we have $\phi(\theta_2) \le \phi(\theta_1) + \langle\nabla\phi(\theta_1), \theta_2 - \theta_1\rangle + \frac{L}{2}\|\theta_2 - \theta_1\|^2$.
Definition 1 (Biased gradient oracle) A map $g: \mathbb{R}^d \times \mathcal{D} \to \mathbb{R}^d$ such that $g(\theta, \xi) = \nabla\phi(\theta) + b(\theta, \xi) + n(\theta, \xi)$ for a bias $b: \mathbb{R}^d \times \mathcal{D} \to \mathbb{R}^d$ and zero-mean noise $n: \mathbb{R}^d \times \mathcal{D} \to \mathbb{R}^d$, that is, $\mathbb{E}_\xi\, n(\theta, \xi) = 0$.

Compared to the standard stochastic gradient oracle, the above definition introduces the bias term $b$. In noisy-label settings, $b$ is generated by the data with corrupted labels.
Assumption 3 ($\sigma$-Bounded noise) There exists a constant $\sigma > 0$ such that $\mathbb{E}_\xi\|n(\theta, \xi)\|^2 \le \sigma^2$ for all $\theta \in \mathbb{R}^d$.
Assumption 4 ($\zeta$-Bounded bias) There exists a constant $\zeta > 0$ such that for any $\xi$, $\|b(\theta, \xi)\|^2 \le \zeta^2$ for all $\theta \in \mathbb{R}^d$.
For simplicity, assume the learning rate is a constant $\gamma$; then in every iteration biased SGD performs the update $\theta_{t+1} \leftarrow \theta_t - \gamma\, g(\theta_t, \xi)$. The following theorem gives the convergence of the gradient norm under biased SGD.
Theorem 4 (Convergence of Biased SGD (formal)) Under Assumptions 2, 3, and 4, define $F = \phi(\theta_0) - \phi^*$ and choose the step size $\gamma = \min\big\{\frac{1}{L}, \frac{1}{\sigma}\sqrt{\frac{F}{LT}}\big\}$. Denoting the desired accuracy by $k$, then $T = O\big(\frac{1}{k} + \frac{\sigma^2}{k^2}\big)$ iterations are sufficient to obtain $\min_{t\in[T]} \mathbb{E}\|\nabla\phi(\theta_t)\|^2 = O(k + \zeta^2)$.
Remark 2 Setting $k = \zeta^2$, $T = O\big(\frac{1}{\zeta^2} + \frac{\sigma^2}{\zeta^4}\big)$ iterations are sufficient to get $\min_{t\in[T]} \mathbb{E}\|\nabla\phi(\theta_t)\|^2 = O(\zeta^2)$, and performing more iterations does not improve the accuracy in terms of convergence.
Since this is a standard result — similar statements appear in Bernstein et al. (2018); Devolder et al. (2014); Hu et al. (2020); Ajalloeian & Stich (2020) — we provide the proof here for completeness. Proof: By L-smoothness, we have:

$$\phi(\theta_2) \le \phi(\theta_1) + \langle\nabla\phi(\theta_1), \theta_2 - \theta_1\rangle + \frac{L}{2}\|\theta_2 - \theta_1\|^2.$$

Using $\gamma \le \frac{1}{L}$, we have

$$\begin{aligned}
\mathbb{E}\,\phi(\theta_{t+1}) &\le \phi(\theta_t) - \gamma\langle\nabla\phi(\theta_t), \mathbb{E} g_t\rangle + \frac{\gamma^2 L}{2}\big(\mathbb{E}\|g_t - \mathbb{E} g_t\|^2 + \|\mathbb{E} g_t\|^2\big) \\
&= \phi(\theta_t) - \gamma\langle\nabla\phi(\theta_t), \nabla\phi(\theta_t) + b_t\rangle + \frac{\gamma^2 L}{2}\big(\mathbb{E}\|n_t\|^2 + \|\nabla\phi(\theta_t) + b_t\|^2\big) \\
&\le \phi(\theta_t) + \frac{\gamma}{2}\big({-2}\langle\nabla\phi(\theta_t), \nabla\phi(\theta_t) + b_t\rangle + \|\nabla\phi(\theta_t) + b_t\|^2\big) + \frac{\gamma^2 L}{2}\mathbb{E}\|n_t\|^2 \\
&= \phi(\theta_t) + \frac{\gamma}{2}\big({-\|\nabla\phi(\theta_t)\|^2} + \|b_t\|^2\big) + \frac{\gamma^2 L}{2}\mathbb{E}\|n_t\|^2.
\end{aligned}$$

Since $\|b_t\|^2 \le \zeta^2$ and $\mathbb{E}\|n_t\|^2 \le \sigma^2$, plugging in the learning-rate constraint gives

$$\mathbb{E}\,\phi(\theta_{t+1}) - \phi(\theta_t) \le -\frac{\gamma}{2}\|\nabla\phi(\theta_t)\|^2 + \frac{\gamma}{2}\zeta^2 + \frac{\gamma^2 L}{2}\sigma^2.$$

Moving the gradient-norm term to the left-hand side and summing over iterations, we get

$$\frac{1}{2T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla\phi(\theta_t)\|^2 \le \frac{F}{T\gamma} + \frac{\zeta^2}{2} + \frac{\gamma L\sigma^2}{2}.$$

Taking the minimum over $t$ and substituting the step-size choice directly yields the result.
A.9 PROOF OF THEOREM 2
Let $\tilde{G}$ denote the corrupted minibatch and $G$ the original clean minibatch, with $|G| = |\tilde{G}| = m$. Let $N$ be the set of remaining data; according to our algorithm, $|N| = n = (1-\epsilon)m$. Define $A$ as the set of individual clean gradients that are not discarded by the algorithm and $B$ as the set of individual corrupted gradients that are not discarded; by definition, $N = A \cup B$. Let $AD$ be the set of individual good gradients that are discarded and $AR$ the set of individual good gradients that were replaced by corrupted data, so that $G = A \cup AD \cup AR$. Let $BD$ be the set of individual corrupted gradients discarded by our algorithm. Denote a good gradient by $g_i = \alpha_i W_i$ and a bad gradient by $\tilde{g}_i$; by our assumption, $\|\tilde{g}_i\| \le L$. We now bound the $\ell_2$ error:
$$\begin{aligned}
\|\mu(G) - \mu(N)\| &= \Big\|\frac{1}{m}\sum_{i\in G} g_i - \Big(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big)\Big\| \\
&= \Big\|\frac{1}{n}\sum_{i\in G}\frac{n}{m}\, g_i - \Big(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big)\Big\| \\
&= \Big\|\frac{1}{n}\sum_{i\in A}\frac{n}{m}g_i + \frac{1}{n}\sum_{i\in AD}\frac{n}{m}g_i + \frac{1}{n}\sum_{i\in AR}\frac{n}{m}g_i - \Big(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big)\Big\| \\
&= \Big\|\frac{1}{n}\sum_{i\in A}\frac{n-m}{m}g_i + \frac{1}{n}\sum_{i\in AD}\frac{n}{m}g_i + \frac{1}{n}\sum_{i\in AR}\frac{n}{m}g_i - \frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big\| \\
&\le \Big\|\frac{1}{n}\sum_{i\in A}\frac{n-m}{m}g_i + \frac{1}{n}\sum_{i\in AD}\frac{n}{m}g_i + \frac{1}{n}\sum_{i\in AR}\frac{n}{m}g_i\Big\| + \Big\|\frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big\| \\
&\le \sum_{i\in A}\Big\|\frac{m-n}{nm}g_i\Big\| + \sum_{i\in AD}\Big\|\frac{1}{m}g_i\Big\| + \sum_{i\in AR}\Big\|\frac{1}{m}g_i\Big\| + \sum_{i\in B}\frac{1}{n}\|\tilde{g}_i\|.
\end{aligned}$$
By the filtering algorithm, we can guarantee $\|\tilde{g}_i\| \le L$. Let $|A| = x$; then $|B| = n - x = (1-\epsilon)m - x$, $|AR| = m - n = \epsilon m$, and $|AD| = m - |A| - |AR| = m - x - (m - n) = n - x = (1-\epsilon)m - x$. Thus:

$$\begin{aligned}
\|\mu(G) - \mu(N)\| &\le x\,\frac{m-n}{nm}L + (n-x)\frac{1}{m}L + (m-n)\frac{1}{m}L + (n-x)\frac{1}{n}L \\
&= x\Big(\frac{m-n}{nm} - \frac{1}{m}\Big)L + n\frac{1}{m}L + (m-n)\frac{1}{m}L + (n-x)\frac{1}{n}L \\
&= \frac{1}{m}\cdot\frac{2\epsilon - 1}{1-\epsilon}\,xL + (1-\epsilon)L + \epsilon L + L - \frac{1}{n}xL \\
&= xL\,\frac{2\epsilon - 2}{n} + 2L.
\end{aligned}$$

Since the coefficient $\frac{2\epsilon - 2}{n}$ is negative, the bound is largest when $x$ is as small as possible. By our problem setting, $x \ge n - \epsilon m = (1-2\epsilon)m$; substituting $x = (1-2\epsilon)m$ gives

$$\|\mu(G) - \mu(N)\| \le (1-2\epsilon)mL\,\frac{2\epsilon - 2}{n} + 2L = -2(1-2\epsilon)L + 2L = 4\epsilon L.$$

Hence, since $\epsilon < 0.5$,

$$\|\mu(G) - \mu(N)\| = O(\epsilon L).$$
Note that if the Lipschitz continuity assumption does not hold, then $L$ becomes dimension-dependent.
A.10 PROOF OF RANDOMIZED FILTERING ALGORITHM
Lemma 3 (Gradient Estimation Error for Randomized Filtering) Given a corrupted matrix $\tilde{G} \in \mathbb{R}^{m\times d}$ generated as in Problem 2, let $G \in \mathbb{R}^{m\times d}$ be the original clean gradient matrix. Suppose we arbitrarily select $n = (1-\epsilon)m$ rows from $\tilde{G}$ to get the remaining set $N \in \mathbb{R}^{n\times d}$. Let $\mu$ be the empirical mean function, assume the clean gradient before the loss layer has bounded operator norm $\|W\|_{op} \le C$, the maximum clean loss-layer gradient is $\max_i\|\alpha_i\| = k$, the maximum corrupted loss-layer gradient is $\max_i\|\delta_i\| = v$, and assume $\epsilon < 0.5$. Then we have:

$$\|\mu(G) - \mu(N)\| \le Ck\,\frac{3\epsilon - 4\epsilon^2}{1-\epsilon} + Cv\,\frac{\epsilon}{1-\epsilon}.$$
A.10.1 PROOF OF LEMMA 3
Let $\tilde{G}$ denote the corrupted minibatch and $G$ the original clean minibatch, with $|G| = |\tilde{G}| = m$. Let $N$ be the set of remaining data; according to our algorithm, $|N| = n = (1-\epsilon)m$. Define $A$ as the set of individual clean gradients that are not discarded by Algorithm 3 and $B$ as the set of individual corrupted gradients that are not discarded; by definition, $N = A \cup B$. Let $AD$ be the set of individual good gradients that are discarded and $AR$ the set of individual good gradients that were replaced by corrupted data, so that $G = A \cup AD \cup AR$. Let $BD$ be the set of individual corrupted gradients discarded by our algorithm. Denote a good gradient by $g_i = \alpha_i W_i$ and a bad gradient by $\tilde{g}_i = \delta_i W_i$; by our assumption, $\|W_i\|_{op} \le C$.

We now bound the $\ell_2$ error:

$$\begin{aligned}
\|\mu(G) - \mu(N)\| &= \Big\|\frac{1}{m}\sum_{i\in G} g_i - \Big(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big)\Big\| \\
&= \Big\|\frac{1}{n}\sum_{i\in G}\frac{n}{m}\, g_i - \Big(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big)\Big\| \\
&= \Big\|\frac{1}{n}\sum_{i\in A}\frac{n}{m}g_i + \frac{1}{n}\sum_{i\in AD}\frac{n}{m}g_i + \frac{1}{n}\sum_{i\in AR}\frac{n}{m}g_i - \Big(\frac{1}{n}\sum_{i\in A} g_i + \frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big)\Big\| \\
&= \Big\|\frac{1}{n}\sum_{i\in A}\frac{n-m}{m}g_i + \frac{1}{n}\sum_{i\in AD}\frac{n}{m}g_i + \frac{1}{n}\sum_{i\in AR}\frac{n}{m}g_i - \frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big\| \\
&\le \Big\|\frac{1}{n}\sum_{i\in A}\frac{n-m}{m}g_i + \frac{1}{n}\sum_{i\in AD}\frac{n}{m}g_i + \frac{1}{n}\sum_{i\in AR}\frac{n}{m}g_i\Big\| + \Big\|\frac{1}{n}\sum_{i\in B}\tilde{g}_i\Big\| \quad (1)
\end{aligned}$$
Let $|A| = x$; then $|B| = n - x = (1-\epsilon)m - x$, $|AR| = m - n = \epsilon m$, and $|AD| = m - |A| - |AR| = m - x - (m-n) = n - x = (1-\epsilon)m - x$. Thus, we have:

$$\begin{aligned}
\|\mu(G) - \mu(N)\| &\le \Big\|\sum_{i\in A}\frac{m-n}{nm}g_i + \sum_{i\in AD}\frac{1}{m}g_i + \sum_{i\in AR}\frac{1}{m}g_i\Big\| + \sum_{i\in B}\frac{1}{n}\|\tilde{g}_i\| \\
&\le \sum_{i\in A}\Big\|\frac{m-n}{nm}g_i\Big\| + \sum_{i\in AD}\Big\|\frac{1}{m}g_i\Big\| + \sum_{i\in AR}\Big\|\frac{1}{m}g_i\Big\| + \sum_{i\in B}\frac{1}{n}\|\tilde{g}_i\|.
\end{aligned}$$
For each individual gradient, by the label-corruption gradient definition in Problem 2 and the assumption $\|W\|_{op} \le C$, we have $\|g_i\| \le \|\alpha_i\|\|W_i\|_{op} \le C\|\alpha_i\|$. Denoting $\max_i\|\alpha_i\| = k$ and $\max_i\|\delta_i\| = v$, we have $\|g_i\| \le Ck$ and $\|\tilde{g}_i\| \le Cv$. Therefore:

$$\|\mu(G) - \mu(N)\| \le Cx\,\frac{m-n}{nm}k + C(n-x)\frac{1}{m}k + C(m-n)\frac{1}{m}k + C(n-x)\frac{1}{n}v.$$
Note that the above upper bound holds for any feasible $x$; we therefore take the worst case over $x$. Rearranging the terms, we have

$$\begin{aligned}
\|\mu(G) - \mu(N)\| &\le Cx\Big(\frac{m-n}{nm} - \frac{1}{m}\Big)k + Cn\frac{1}{m}k + C(m-n)\frac{1}{m}k + C(n-x)\frac{1}{n}v \\
&= C\frac{1}{m}\cdot\frac{2\epsilon - 1}{1-\epsilon}\,xk + Ck + Cv - \frac{1}{n}Cxv \\
&= Cx\Big(\frac{k(2\epsilon - 1)}{m(1-\epsilon)} - \frac{v}{n}\Big) + Ck + Cv \\
&= Cx\,\frac{k(2\epsilon - 1) - v}{m(1-\epsilon)} + Ck + Cv.
\end{aligned}$$
Since $\epsilon < 0.5$ implies $\frac{k(2\epsilon - 1) - v}{m(1-\epsilon)} < 0$, the bound is decreasing in $x$, so the worst case is the smallest feasible $x$. By our algorithm, $n - \epsilon m = m(1-\epsilon) - \epsilon m = (1-2\epsilon)m \le x \le n = (1-\epsilon)m$. Substituting $x = (1-2\epsilon)m$, we have
$$\begin{aligned}
\|\mu(G) - \mu(N)\| &\le Ck(1-2\epsilon)\frac{2\epsilon - 1}{1-\epsilon} + Ck + Cv - Cv\,\frac{1-2\epsilon}{1-\epsilon} \\
&= Ck\,\frac{3\epsilon - 4\epsilon^2}{1-\epsilon} + Cv\,\frac{\epsilon}{1-\epsilon}.
\end{aligned}$$
A.11 PROOF OF THEOREM 3
By Algorithm 3, we can guarantee that $v \le k$; then we have:

$$\begin{aligned}
\|\mu(G) - \mu(N)\| &\le Ck\,\frac{3\epsilon - 4\epsilon^2}{1-\epsilon} + Cv\,\frac{\epsilon}{1-\epsilon} \\
&\le Ck\,\frac{4\epsilon - 4\epsilon^2}{1-\epsilon} = 4\epsilon Ck = O(\epsilon\sqrt{q}),
\end{aligned}$$

since $C$ is a constant and $k$ is the norm of a $q$-dimensional vector.
A.12 COMPARISON BETWEEN SORTING LOSS-LAYER GRADIENT NORM AND SORTING THE LOSS VALUE
Assume we have a $d$-class label $y \in \mathbb{R}^d$, where $y_k = 1$ and $y_i = 0$ for $i \neq k$, and two predictions $p \in \mathbb{R}^d$, $q \in \mathbb{R}^d$ in the probability simplex. Without loss of generality, assume $p$ has the smaller cross-entropy loss, which indicates $p_k \ge q_k$. For MSE, assume the opposite ordering holds:
$$\|p - y\|^2 \ge \|q - y\|^2 \;\Rightarrow\; \sum_{i\neq k} p_i^2 + (1 - p_k)^2 \ge \sum_{i\neq k} q_i^2 + (1 - q_k)^2 \quad (2)$$
For the entries $p_i$, $i \neq k$, we have

$$\mathrm{Var}(p_i) = \mathbb{E}(p_i^2) - \mathbb{E}(p_i)^2 = \frac{1}{d-1}\sum_{i\neq k} p_i^2 - \frac{1}{(d-1)^2}(1 - p_k)^2 \quad (3)$$
Then

$$\begin{aligned}
&\sum_{i\neq k} p_i^2 + (1 - p_k)^2 \ge \sum_{i\neq k} q_i^2 + (1 - q_k)^2 \\
\Rightarrow\;& \mathrm{Var}_{i\neq k}(p_i) + \frac{d}{(d-1)^2}(1 - p_k)^2 \ge \mathrm{Var}_{i\neq k}(q_i) + \frac{d}{(d-1)^2}(1 - q_k)^2 \\
\Rightarrow\;& \mathrm{Var}_{i\neq k}(p_i) - \mathrm{Var}_{i\neq k}(q_i) \ge \frac{d}{(d-1)^2}\big((1 - q_k)^2 - (1 - p_k)^2\big) \\
\Rightarrow\;& \mathrm{Var}_{i\neq k}(p_i) - \mathrm{Var}_{i\neq k}(q_i) \ge \frac{d}{(d-1)^2}\big((p_k - q_k)(2 - p_k - q_k)\big)
\end{aligned} \quad (4)$$

1. What are the main contributions and novel aspects introduced by the paper in robust machine learning?
2. What are the weaknesses of the paper compared to prior works, particularly regarding the assumptions and theoretical results?
3. Do you have any questions or concerns regarding the proposed algorithm's performance and its limitations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific suggestions for improving the paper, such as adding missing proofs or providing more detailed comparisons with other works?

Review
Summary
The paper studies the problem of robust machine learning, where the labels of a fraction of samples are arbitrarily corrupted. The paper proposes an algorithm to tackle this problem and evaluates it on standard datasets.
Positives
The paper studies an important problem prevalent in modern machine learning, and proposes two algorithms to solve it. The experiments suggest that the proposed algorithm is better than the baselines.
Negatives
The paper does not cite highly relevant papers, overclaims its results, and the theoretical results in this paper are immediate. Moreover, the paper is not well-written. More details are given below:
Page 1: "Instead of developing an accurate criterion for detection corrupted samples, we adopt a novel perspective and focus on limiting the collective impact of corrupted samples during the learning process through robust mean estimation of gradients."
This is not a novel perspective and has been known in the robust machine learning community for some time [1,2]. These papers have the same underlying idea, but they are not discussed in this paper. [1] is only briefly mentioned in Remark 2, but the comparison is not fair: the results in [1] hold under fairly general conditions, whereas the results in this paper require the gradient to be uniformly bounded, which makes the problem significantly simpler.
Theorem 2 is a trivial result, well-known in the field. Moreover, the way it is presented is misleading and confusing: the error depends on the quantile of the norms in G, which is hidden under the O(.) notation. The proof is also missing from the paper.
Assumption 1, i.e., Lipschitz continuity of the loss function, is very restrictive and is not satisfied by popular choices of loss function. This assumption trivializes the problem and restricts its applicability.
In the same vein, Theorem 3 makes unrealistic assumptions. The assumption that $\|W\|_{op} \le C$ is very restrictive and does not hold for usual learning tasks. This assumption in a sense requires that the covariates $x \in \mathbb{R}^d$ have bounded norms, whereas the norm of a typical vector in $\mathbb{R}^d$ grows as $\sqrt{d}$.
Score
I propose to reject this paper. Prior work ([1,2]) has studied this problem in much greater generality, which is not discussed in this work. The assumptions in the present work are severely restrictive.
Other major comments:
Robust linear regression with arbitrary corruptions in the responses has been extensively studied in the literature, but these works have not been cited; for example, see [3,4]. In particular, least trimmed squares is an algorithm that removes outliers based on loss values and comes with a theoretical guarantee via an alternating minimization algorithm [3,4].
Theorem 1 is folklore, and this should be reflected in the main text. Currently, this information is only given in the Appendix.
The paper is not well written:
1. The proof of Theorem 2 is missing.
2. The O(.) notation hides the dependence on important quantities in the paper.
3. Important notations have not been defined in the paper.
4. Abbreviations should not be used, for example, Thm., Algo., Asm., etc.
5. There are numerous typos and grammatical errors. For example, "has a remarkably impact".
Relevant papers
Diakonikolas, I., G. Kamath, D. M. Kane, J. Li, J. Steinhardt, and A. Stewart. “Sever: A Robust Meta-Algorithm for Stochastic Optimization.” In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 97:1596–1606. Proceedings of Machine Learning Research. PMLR, 2019. http://proceedings.mlr.press/v97/diakonikolas19a.html.
Prasad, A., A. S. Suggala, S. Balakrishnan, and P. Ravikumar. “Robust Estimation via Robust Gradient Estimation.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 82, no. 3 (July 2020): 601–27. https://doi.org/10.1111/rssb.12364.
Bhatia, K., P. Jain, P. Kamalaruban, and P. Kar. “Consistent Robust Regression.” In Advances in Neural Information Processing Systems 30, NeurIPS 2017, 2110–2119, 2017. http://papers.nips.cc/paper/6806-consistent-robust-regression.
Bhatia, K., P. Jain, and P. Kar. “Robust Regression via Hard Thresholding.” In Advances in Neural Information Processing Systems 28, NeurIPS 2015, 721–729, 2015. http://papers.nips.cc/paper/6010-robust-regression-via-hard-thresholding. |
ICLR

Title
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Abstract
Vision transformers (ViTs) have recently set off a new wave in neural architecture design thanks to their record-breaking performance in various vision tasks. In parallel, to fulfill the goal of deploying ViTs into real-world vision applications, their robustness against potential malicious attacks has gained increasing attention. In particular, recent works show that ViTs are more robust against adversarial attacks as compared with convolutional neural networks (CNNs), and conjecture that this is because ViTs focus more on capturing global interactions among different input/feature patches, leading to their improved robustness to local perturbations imposed by adversarial attacks. In this work, we ask an intriguing question: “Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?” Driven by this question, we first conduct a comprehensive experiment regarding the robustness of both ViTs and CNNs under various existing adversarial attacks to understand the underlying reason favoring their robustness. Based on the drawn insights, we then propose a dedicated attack framework, dubbed Patch-Fool, that fools the self-attention mechanism by attacking its basic component (i.e., a single patch) with a series of attention-aware optimization techniques. Interestingly, our Patch-Fool framework shows for the first time that ViTs are not necessarily more robust than CNNs against adversarial perturbations. In particular, we find that ViTs are more vulnerable learners compared with CNNs against our Patch-Fool attack which is consistent across extensive experiments, and the observations from Sparse/Mild Patch-Fool, two variants of Patch-Fool, indicate an intriguing insight that the perturbation density and strength on each patch seem to be the key factors that influence the robustness ranking between ViTs and CNNs. It can be expected that our Patch-Fool framework will shed light on both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment. Our codes are available at https://github.com/RICE-EIC/Patch-Fool.
1 INTRODUCTION
The recent performance breakthroughs achieved by vision transformers (ViTs) (Dosovitskiy et al., 2020) have fueled an increasing enthusiasm towards designing new ViT architectures for different vision tasks, including object detection (Carion et al., 2020; Beal et al., 2020), semantic segmentation (Strudel et al., 2021; Zheng et al., 2021; Wang et al., 2021), and video recognition (Arnab et al., 2021; Liu et al., 2021b; Li et al., 2021b; Fan et al., 2021). To fulfill the goal of deploying ViTs into real-world vision applications, the security concern of ViTs is of great importance and challenge, especially in the context of adversarial attacks (Goodfellow et al., 2014), under which an imperceptible perturbation onto the inputs can mislead the models to malfunction.
In response, the robustness of ViTs against adversarial attacks has attracted increasing attention. For example, recent works (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021) find that in addition to ViTs’ decent task performances, they are more robust to adversarial attacks compared with convolutional neural networks (CNNs) under comparable model complexities. In particular, (Shao et al., 2021) claims that ViTs focus more on capturing the global interaction among
input/feature patches via its self-attention mechanism and the learned features contain less low-level information, leading to superior robustness to the local perturbations introduced by adversarial attacks. A natural response to this seemingly good news would be determining whether ViTs are truly robust against all kinds of adversarial perturbations or if their current win in robustness is an inevitable result of biased evaluations using existing attack methods that are mostly dedicated to CNNs. To unveil the potential vulnerability of ViTs, this work takes the first step in asking an intriguing question: “Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?”, and makes the following contributions:
• We propose a new attack framework, dubbed Patch-Fool, aiming to fool the self-attention mechanism by attacking the basic component (i.e., a single patch) participating in ViTs’ self-attention calculations. Our Patch-Fool attack features a novel objective formulation, which is then solved by Patch-Fool’s integrated attention-aware patch selection technique and attention-aware loss design;
• We evaluate the robustness of both ViTs and CNNs against our Patch-Fool attack with extensive experiments and find that ViTs are consistently less robust than CNNs across various attack settings, indicating that ViTs are not always robust learners and their seeming robustness against existing attacks can be overturned under dedicated adversarial attacks;
• We further benchmark the robustness of both ViTs and CNNs under two variants of PatchFool, i.e., Sparse Patch-Fool and Mild Patch-Fool, and discover that the perturbation density, defined as the number of perturbed pixels per patch, and the perturbation strength highly influence the robustness ranking between ViTs and CNNs, where our Patch-Fool is an extreme case of high perturbation density and strength.
We believe our work has opened up a new perspective for exploring ViTs’ vulnerability and understanding the different behaviors of CNNs and ViTs under adversarial attacks, and can provide insights to both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment.
2 RELATED WORKS
Vision transformers. Motivated by the great success of Transformers in the natural language processing (NLP) field (Vaswani et al., 2017), ViTs have been developed by splitting an input image into a series of image patches and adopting self-attention modules for encoding the image (Dosovitskiy et al., 2020), and been shown to achieve competitive or superior performance over CNNs via dedicated data augmentation (Touvron et al., 2021) or self-attention structures (Yang et al., 2021; Graham et al., 2021; Liu et al., 2021a). As such, there has been tremendously increased attention on applying ViTs to various computer vision applications, such as self-supervised learning (Caron et al., 2021; Chen et al., 2021b; Xie et al., 2021; Li et al., 2021a), object detection (Carion et al., 2020; Beal et al., 2020), and semantic segmentation (Strudel et al., 2021; Zheng et al., 2021; Wang et al., 2021). The achievable performance of ViTs are continuously refreshed by emerging ViT variants, which provide new arts for designing ViT architectures. For example, convolutional modules have been incorporated into ViTs for capturing low-level features (Xiao et al., 2021; Wu et al., 2021; Graham et al., 2021; Peng et al., 2021), and replacing the global self-attention mechanism with local self-attention modules (Liu et al., 2021a; Dong et al., 2021; Liang et al., 2021; Liu et al., 2021b; Chu et al., 2021) has further pushed forward ViTs’ achievable accuracy-efficiency trade-off. Motivated by the growing interest in deploying ViTs into real-world applications, this work aims to better understand the robustness of ViTs and to develop adversarial attacks dedicated to ViTs.
Adversarial attack and defense. Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks (Goodfellow et al., 2014), i.e., imperceptible perturbations onto the inputs can mislead DNNs to make wrong predictions. As adversaries, stronger attacks are continuously developed, including both white-box (Madry et al., 2017; Croce & Hein, 2020; Carlini & Wagner, 2017; Papernot et al., 2016; Moosavi-Dezfooli et al., 2016) and black-box ones (Chen et al., 2017; Ilyas et al., 2018b; Andriushchenko et al., 2020; Guo et al., 2019; Ilyas et al., 2018a), which aggressively degrade the performances of the target DNN models. In particular, (Brown et al., 2017; Liu et al., 2020) build universal adversarial patches that are able to attack different scenes and (Liu et al., 2018; Zhao et al., 2020; Hoory et al., 2020) adopt adversarial patches to attack object detectors. However, these works focus on merely CNNs, questions regarding (1) whether patch-wise attacks
are effective for ViTs as compared to CNNs, and (2) how to efficiently construct strong patch-wise attacks utilizing the unique structures of ViTs are still under-explored yet interesting to be studied, especially considering patches are the basic elements for composing the inputs of ViTs. In response, various defense methods (Guo et al., 2017; Xie et al., 2017; Cohen et al., 2019; Metzen et al., 2017; Feinman et al., 2017; Fu et al., 2021a;b; Shafahi et al., 2019; Madry et al., 2017; Wong et al., 2019) have been proposed to improve DNNs’ robustness against those attacks. The readers are referred to (Akhtar & Mian, 2018; Chakraborty et al., 2018) for more attack and defense methods.
Robustness of vision transformers. Driven by the impressive performance recently achieved by ViTs in various vision tasks, their robustness has gained increasing attention. A consistent observation drawn by pioneering works that study ViTs’ robustness is that ViTs are more robust to adversarial attacks than CNNs since ViTs are more capable of capturing the global interactions among patches, while CNNs focus on local features and thus are more vulnerable to local adversarial perturbations. In particular, (Bhojanapalli et al., 2021) shows that ViT models pretrained with a sufficient amount of data are at least as robust as their ResNet counterparts on a broad range of perturbations, including natural corruptions, distribution shifts, and adversarial perturbations; (Aldahdooh et al., 2021) finds that vanilla ViTs or hybrid-ViTs are more robust than CNNs under Lp-based attacks; and (Shao et al., 2021) further explains that ViTs’ learned features contain less low-level information and are more generalizable, leading to their superior robustness, and introducing convolutional blocks that extract more low-level features will reduce the ViTs’ adversarial robustness. In addition, ViTs’ adversarial transferability has also been studied: (Mahmood et al., 2021) shows that adversarial examples do not readily transfer between CNNs and transformers and (Naseer et al., 2021; Wei et al., 2021) propose techniques to boost the adversarial transferability between ViTs and from ViTs to CNNs. In parallel, (Mao et al., 2021) refines ViTs’ architecture design to improve robustness. In our work, we challenge the common belief that ViTs are more robust than CNNs, which is concluded based on evaluations using existing attack methods, and propose to customize adaptive attacks utilizing ViTs’ captured patch-wise global interactions to make ViTs weaker learners.
3 THE PROPOSED PATCH-FOOL FRAMEWORK
In this section, we present our Patch-Fool attack method that perturbs a whole patch to fool ViTs and unveils a vulnerable perspective of ViTs.
3.1 PATCH-FOOL: VALIDATING AND RETHINKING THE ROBUSTNESS OF VITS
We extensively evaluate the robustness of several representative ViT variants against four state-of-the-art adversarial attacks (i.e., PGD (Madry et al., 2017), AutoAttack (Croce & Hein, 2020), CW-L∞ (Carlini & Wagner, 2017), and CW-L2) with different perturbation strengths in Appendix A.1. We observe that (1) ViTs are consistently more robust than CNNs with comparable model complexities under all attack methods, which is consistent with previous observations (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021), and (2) ViT variants equipped with local self-attention (Swin (Liu et al., 2021a)) or convolutional modules (LeViT (Graham et al., 2021)), which improve the models' capability of capturing local features and thus boost the clean accuracy, are more vulnerable to adversarial attacks, although they are still more robust than CNNs with comparable complexities. This indicates that the global attention mechanism itself can serve as a good robustification technique against existing adversarial attacks, even in lightweight ViTs with small model complexities. For example, as shown in Fig. 1, the gap between the attention maps generated by clean and adversarial inputs in deeper layers remains small. We are thus curious: "Are the global attentions in ViTs truly robust, or has their vulnerability simply not been fully explored and exploited?" To answer this, we propose our customized attack in the following sections.
3.2 PATCH-FOOL: MOTIVATION
Given the insensitivity of ViTs’ self-attention mechanism to local perturbations, we pay a close attention to the basic component (i.e., a single patch) participating in the self-attention calculation, and hypothesize that customized adversarial perturbations onto a patch can be more effective in fooling the captured patch-wise global interactions of self-attention modules than attacking the CNN modules. This is also inspired by the word substitution attacks (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2019; Zang et al., 2019) to Transformers in NLP tasks, which replace a word with its synonyms, and here an image patch in ViTs serves a similar role as a word.
3.3 PATCH-FOOL: SETUP AND OBJECTIVE FORMULATION
Attack setup. In our proposed Patch-Fool attack, we do not limit the perturbation strength on each pixel; instead, we constrain all the perturbed pixels to lie within one patch (or several patches), which can be viewed as a variant of sparse attacks (Dong et al., 2020; Modas et al., 2019; Croce & Hein, 2019). Such attack strategies lead to adversarial examples with a noisy patch as shown in Fig. 1, which visually resembles and emulates natural corruptions in a small region of the original image, e.g., one noisy patch accounts for only 1/196 of the input of DeiT-S (Touvron et al., 2021), as could be caused by potential defects of the sensors or potential noise/damage of the optical devices.
Objective formulation. Given the loss function J and a series of input image patches X = [x1, · · · ,xn]⊤ ∈ Rn×d with its associated label y, the objective of our adversarial algorithm can be formulated as:
$$\mathop{\arg\max}_{1\le p\le n,\ E\in\mathbb{R}^{n\times d}} J(X + \mathbb{1}_p \odot E,\ y) \quad (1)$$

where $E$ denotes the adversarial perturbation, $\mathbb{1}_p \in \mathbb{R}^n$ with $\mathbb{1}_p(i) = \begin{cases} 1, & i = p \\ 0, & i \neq p \end{cases}$ is a one-hot vector, and $\odot$ represents the penetrating face product such that $a \odot B = [a \circ b_1, \cdots, a \circ b_d]$, where $\circ$ is the Hadamard product and $b_j$ is the $j$-th column of matrix $B$. For solving Eq. 1, our Patch-Fool needs to (1) select the adversarial patch $p$, and (2) optimize the corresponding $E$, as elaborated in Sec. 3.4 and Sec. 3.5, respectively.
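As a minimal illustration of this constraint, the PyTorch sketch below applies a perturbation to a single patch row only; the shapes correspond to DeiT-S but are otherwise assumptions:

```python
import torch

# Sketch of the constraint in Eq. 1: only the p-th row (patch) of X is perturbed.
n, d, p = 196, 768, 42            # e.g. DeiT-S: 196 patches of dimension 768
X = torch.randn(n, d)             # patch embeddings (or flattened patch pixels)
E = torch.zeros(n, d, requires_grad=True)

one_p = torch.zeros(n); one_p[p] = 1.0
X_adv = X + one_p.unsqueeze(1) * E   # 1_p "penetrating face" product: row mask
```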
3.4 PATCH-FOOL: DETERMINE p VIA ATTENTION-AWARE PATCH SELECTION
Denote $a^{(l,h,i)} = [a^{(l,h,i)}_1, \cdots, a^{(l,h,i)}_n] \in \mathbb{R}^n$ as the attention distribution for the $i$-th token of the $h$-th head in the $l$-th layer. For each layer $l$, we define:

$$s^{(l)}_j = \sum_{h,i} a^{(l,h,i)}_j \quad (2)$$

which measures the importance of the $j$-th token in the $l$-th layer based on its contributions to the other tokens in the self-attention calculation. For better fooling ViTs, we select the most influential patch $p = \arg\max_j s^{(l)}_j$ for a predefined layer $l$. We fix $l = 5$ by default, since patches at later self-attention layers are observed to diverge from the input patches due to the increased information mixed in from other patches, making them non-ideal for guiding the selection of input patches, as justified in Sec. 4.3.
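A minimal sketch of this selection rule is given below, assuming the per-layer attention maps have been collected from a forward pass; handling of the class token is omitted for brevity:

```python
import torch

def select_patch(attn_maps, layer=5):
    """Sketch of Eq. 2: attn_maps[l] has shape (heads, n_tokens, n_tokens),
    with attn_maps[l][h, i, j] = a^{(l,h,i)}_j. Score each patch j by the
    total attention it receives at the chosen layer, and pick the argmax."""
    a = attn_maps[layer]              # (H, n, n)
    s = a.sum(dim=(0, 1))             # s_j = sum over heads h and queries i
    return int(s.argmax())            # index p of the most influential patch
```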
3.5 PATCH-FOOL: OPTIMIZE E VIA ATTENTION-AWARE LOSS
Given the selected adversarial patch index $p$ from the above step, we define the attention-aware loss for the $l$-th layer as follows:

$$J^{(l)}_{\text{ATTN}}(X, p) = \sum_{h,i} a^{(l,h,i)}_p \quad (3)$$

which is expected to be maximized so that the adversarial patch $p$, serving as the target adversarial patch, attracts more attention from other patches and thus more effectively fools ViTs. The perturbation $E$ is then updated based on both the final classification loss, i.e., the cross-entropy loss $J_{\text{CE}}$, and the layer-wise attention-aware loss:

$$J(\tilde{X}, y, p) = J_{\text{CE}}(\tilde{X}, y) + \alpha\sum_l J^{(l)}_{\text{ATTN}}(\tilde{X}, p) \quad (4)$$
where $\tilde{X} \triangleq X + \mathbb{1}_p \odot E$ and $\alpha$ is a weighting coefficient for controlling $\sum_l J^{(l)}_{\text{ATTN}}(\tilde{X}, p)$. We further adopt PCGrad (Yu et al., 2020) to avoid gradient conflicts between the two losses, and thus the update direction for the perturbation $E$ is calculated as:
δE = ∇EJ(X̃, y, p)− α ∑ l βl∇EJCE(X̃, y) (5)
where

$$\beta_l = \begin{cases} 0, & \big\langle \nabla_E J_{\text{CE}}(\tilde{X}, y),\ \nabla_E J^{(l)}_{\text{ATTN}}(\tilde{X}, p) \big\rangle > 0 \\[6pt] \dfrac{\big\langle \nabla_E J_{\text{CE}}(\tilde{X}, y),\ \nabla_E J^{(l)}_{\text{ATTN}}(\tilde{X}, p) \big\rangle}{\|\nabla_E J_{\text{CE}}(\tilde{X}, y)\|^2}, & \text{otherwise} \end{cases} \quad (6)$$
Following PGD (Madry et al., 2017), we iteratively update $E$ using an Adam optimizer (Kingma & Ba, 2014):

$$E^{t+1} = E^t + \eta \cdot \text{Adam}(\delta_{E^t}) \quad (7)$$

where $\eta$ is the step size for each update.
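A minimal sketch of the resulting update (Eqs. 5-7) is given below. It assumes the cross-entropy gradient and the per-layer attention-loss gradients w.r.t. E have already been computed via autograd, and that `optimizer` is an Adam optimizer over E; the function name and calling convention are ours:

```python
import torch

def patch_fool_update(E, grad_ce, grads_attn, optimizer, alpha=0.002):
    """Sketch of Eqs. 5-7: combine the cross-entropy gradient with per-layer
    attention-loss gradients, removing each attention gradient's component
    along grad_ce when the two conflict (PCGrad-style), then take an Adam
    ascent step on E."""
    delta = grad_ce.clone()
    for g_attn in grads_attn:                      # one per transformer layer
        dot = (grad_ce * g_attn).sum()
        beta = 0.0 if dot > 0 else dot / grad_ce.pow(2).sum()
        delta = delta + alpha * (g_attn - beta * grad_ce)
    E.grad = (-delta).detach()                     # ascend via a minimizer
    optimizer.step()                               # Adam step on E (Eq. 7)
```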
3.6 SPARSE PATCH-FOOL: A SPARSE VARIANT OF PATCH-FOOL
Motivation. One natural question associated with Patch-Fool is: “How many pixels within a patch are needed to be perturbed for effectively misleading the model to misclassify the input image?”. There exist two extreme cases: (1) perturbing only a few pixels that lead to local perturbations against which ViTs are more robust, and (2) perturbing the whole patch, i.e., our vanilla Patch-Fool. We hypothesize that answering this question helps better understand under what circumstances ViTs are more (or less) robust than CNNs. To this end, we study a variant of Patch-Fool, dubbed Sparse Patch-Fool, as defined below.
Objective formulation. For enabling Sparse Patch-Fool, we add a sparsity constraint to Eq. 1, i.e.:

$$\mathop{\arg\max}_{1\le p\le n,\ E\in\mathbb{R}^{n\times d},\ M\in\{0,1\}^{n\times d}} J\big(X + \mathbb{1}_p \odot (M \circ E),\ y\big) \quad \text{s.t.}\ \|M\|_0 \le k \quad (8)$$
where the binary mask $M$ with a predefined sparsity parameter $k$ controls the sparsity of $E$. To effectively learn the binary distribution of $M$, we parameterize $M$ with continuous values $\hat{M}$, following (Ramanujan et al., 2020; Diffenderfer & Kailkhura, 2021). During the forward pass, only the top-$k$ elements of $\hat{M}$ are activated and set to 1 while the others are set to 0, satisfying the target sparsity constraint; during the backward pass, all elements of $\hat{M}$ are updated via straight-through estimation (Bengio et al., 2013), as sketched below. We jointly optimize $\hat{M}$ and $E$ as in Eq. 7.
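A minimal sketch of the top-k straight-through mask is given below; `TopKMask` is our illustrative helper, not an API from the paper:

```python
import torch

class TopKMask(torch.autograd.Function):
    """Straight-through top-k binarization of the mask scores M_hat: the
    forward pass keeps the k largest entries (binary M); the backward pass
    passes gradients straight through to all entries of M_hat."""
    @staticmethod
    def forward(ctx, m_hat, k):
        mask = torch.zeros_like(m_hat)
        idx = m_hat.flatten().topk(k).indices
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None   # straight-through estimator

# e.g.: X_adv = X + one_p.unsqueeze(1) * (TopKMask.apply(m_hat, k) * E)
```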
3.7 MILD PATCH-FOOL: A MILD VARIANT OF PATCH-FOOL
In addition to the number of perturbed pixels manipulated by Sparse Patch-Fool, the perturbation strength is another dimension for measuring the perturbations within a patch. We therefore also propose a mild variant of Patch-Fool, dubbed Mild Patch-Fool, with a constraint on the norm of the perturbation $E$ to ensure $\|E\|_2 \le \epsilon$ or $\|E\|_\infty \le \epsilon$, known as the L2 and L∞ constraints, respectively. We achieve this by scaling (for the L2 constraint) or clipping (for the L∞ constraint) $E$ after each update, as sketched below.
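A minimal sketch of the corresponding projection step (our naming, not the paper's code):

```python
import torch

def project(E, eps, norm="linf"):
    """Sketch of Mild Patch-Fool's constraint: clip for L-inf, rescale for L2."""
    with torch.no_grad():
        if norm == "linf":
            E.clamp_(-eps, eps)                  # enforce ||E||_inf <= eps
        else:
            n = E.norm()
            if n > eps:
                E.mul_(eps / n)                  # enforce ||E||_2 <= eps
    return E
```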
4 EVALUATION OF PATCH-FOOL
4.1 EVALUATION SETUP
Models and datasets. We mainly benchmark the robustness of the DeiT (Touvron et al., 2021) family with the ResNet (He et al., 2016) family, using their official pretrained models. Note that we adopt DeiT models without distillation for a fair comparison. We randomly select 2500 images from the validation set of ImageNet for evaluating robustness, following (Bhojanapalli et al., 2021).
Patch-Fool settings. The weight coefficient α in Eq. 4 is set as 0.002. The step size η in Eq. 7 is initialized to be 0.2 and decayed by 0.95 every 10 iterations, and the number of total iterations is 250. For evaluating Patch-Fool with different perturbation strengths, we allow Patch-Fool to attack up to four patches based on the attention-aware patch selection in Sec. 3.4, i.e., the patches with top importance scores defined in Eq. 2 will be selected. Note that we report the robust accuracy instead of the attack success rate throughout this paper as our main focus is the robustness benchmark.
4.2 BENCHMARK THE ROBUSTNESS OF VITS AND CNNS AGAINST PATCH-FOOL
We adopt our Patch-Fool to attack ViTs and use the saliency map to guide the patch selection for attacking CNNs, which is the strongest attack setting as shown in Sec. 4.3. The resulting robust accuracy of both the DeiT and ResNet families under different numbers of attacked patches is shown in Fig. 2. We can observe that DeiT models are consistently less robust against Patch-Fool than their ResNet counterparts under similar model complexity, e.g., compared with ResNet-50, DeiT-S suffers from a 16.31% robust-accuracy drop under the single-patch attack of Patch-Fool, although it has a 3.38% higher clean accuracy and an 18.70% higher robust accuracy against PGD-20 (ϵ = 0.001). This indicates that ViTs are not always robust learners, as they may underperform under customized perturbations as compared to CNNs, and their seeming robustness against existing attacks can be overturned.
4.3 ABLATION STUDY: EFFECTIVENESS OF THE ATTENTION-AWARE PATCH SELECTION
To validate the effectiveness of our attention-aware patch selection method, we benchmark two other patch-selection mechanisms: (1) random patch selection, and (2) saliency-map-based patch selection. For the latter, we adopt the averaged saliency score of a patch, defined as the averaged absolute value of the gradients on each pixel in the patch following (Simonyan et al., 2013), as the metric to select patches. For a fair comparison, we only adopt the final cross-entropy loss $J_{\text{CE}}$ in Eq. 4 in this set of experiments. As shown in Tab. 2, we can see that (1) among the three strategies for attacking ViTs, our attention-aware patch selection is the most effective strategy in most cases and thus we adopt it by default; (2) DeiT variants are still consistently less robust than their ResNet counterparts under similar model complexity, indicating that attacking the basic component participating in self-attention calculations can indeed effectively degrade ViTs' robustness; and (3) Patch-Fool equipped with random patch selection, with only a 2.64% robust-accuracy gap against the best strategy, can already effectively degrade DeiTs' robustness, while it cannot effectively attack ResNets without the guidance of the saliency map, indicating that ViTs are generally more vulnerable than CNNs to patch-wise perturbations.
We also perform an ablation study on the layer $l$ in Eq. 2 at which the attention-aware patch selection is performed. As shown in Tab. 2, selecting early layers generally achieves consistently better results than selecting later layers, which we conjecture is because patches in early layers still roughly maintain the original information extracted from the inputs, while their counterparts in later layers are mixed with information from other patches, providing inferior guidance for selecting the perturbed patch. This conjecture is supported by the observed phase change in the attention maps, i.e., after the 6-th layer, more complex correlations between patches are captured in addition to the diagonal ones. Therefore, we set $l = 5$ by default.
4.4 ABLATION STUDY: EFFECTIVENESS OF THE ATTENTION-AWARE LOSS
To evaluate the effectiveness of our attention-aware loss with the cosine-similarity-based re-weighting mechanism (see Sec. 3.5), we compare it with two baselines: (1) training with only the final cross-entropy loss, i.e., $J_{\text{CE}}$ is enabled without the attention-aware loss, and (2) $\beta_l = 0,\ \forall l \in [1, 12]$, i.e., the layer-wise $J^{(l)}_{\text{ATTN}}$ terms in Eq. 4 are directly summed together with the final $J_{\text{CE}}$. As shown in Tab. 3, we can observe that (1) our attention-aware loss equipped with the cosine-similarity-based re-weighting strategy consistently achieves the best attack performance, e.g., a 3.17% reduction in robust accuracy compared with the baseline without the attention-aware loss; and (2) directly summing up all the losses leads to poor convergence, especially with few perturbed patches.
4.5 BENCHMARK AGAINST SPARSE PATCH-FOOL
Setup. To study the influence of the sparsity of perturbed pixels for both CNNs and ViTs, we evaluate our proposed Sparse Patch-Fool via varying the global perturbation ratio (PR) of the whole image (i.e., k/total-pixel) as well as the number of patches allowed to be perturbed.
Benchmark the robustness of ViTs and CNNs. As shown in Tab. 4, under different perturbation ratios and numbers of perturbed patches, neither ViTs nor CNNs are always the winner in robustness. In particular, under relatively small perturbation ratios or more perturbed patches (e.g., when all patches are allowed to be perturbed), CNNs suffer from worse robustness, while ViTs are the more vulnerable learners under relatively large perturbation ratios together with fewer perturbed patches.
Influence of the number of perturbed patches. We further study the influence of the number of perturbed patches under the same global perturbation ratio, as shown in Tab. 5. We can see that (1) under a small perturbation ratio of 0.05%, which is closer to local perturbations, CNNs are consistently the loser in robustness; and (2) under a relatively large perturbation ratio of 0.5%, although increasing the number of perturbed patches leads to a consistent reduction in CNNs' robust accuracy, the robustness reduction for ViTs quickly saturates, i.e., ViTs gradually switch from the loser to the winner in robustness as compared to CNNs.
Insights. We analyze that smaller perturbation ratios under the same number of perturbed patches, or more perturbed patches under the same perturbation ratio, lead to fewer perturbed pixels within one patch, i.e., a lower perturbation density, which is closer to the local perturbations against which ViTs are more robust than CNNs. In contrast, given more perturbed pixels in one patch, i.e., a higher perturbation density, for which an extreme case is our vanilla Patch-Fool, ViTs become more vulnerable learners than CNNs. This indicates that high perturbation density is a perspective for exploring ViTs' vulnerability that has been neglected by existing adversarial attacks.
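To make the perturbation density concrete, consider an illustrative calculation under the paper's default setting of 224×224 inputs split into 16×16 patches as in DeiT-S, i.e., 196 patches of 256 pixels each. A global perturbation ratio of 0.5% corresponds to

$$\lfloor 0.005 \times 224 \times 224 \rfloor = 250 \text{ perturbed pixels},$$

which yields a density of $250/256 \approx 98\%$ when concentrated in a single patch, but only $\approx 1.3$ pixels per patch (a density of $\approx 0.5\%$) when spread over all 196 patches.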
Considering the perturbation strength is another dimension for measuring the perturbations within a patch in addition to the perturbation density, we evaluate our proposed Mild Patch-Fool in Sec. 4.6.
4.6 BENCHMARK AGAINST MILD PATCH-FOOL
Setup. To study the influence of the perturbation strength within each patch, we evaluate our proposed Mild Patch-Fool (Sec. 3.7) with L2 or L∞ constraints on the patch-wise perturbations under different strengths indicated by ϵ. Note that the perturbation strength ϵ of the L2-based Mild Patch-Fool is measured jointly over all perturbed patches. We benchmark both the DeiT and ResNet families with different numbers of perturbed patches, as shown in Tab. 6 and Tab. 7.
Observations and analysis. We can observe that (1) the robust accuracy is degraded more by a larger perturbation strength ϵ under both L2 and L∞ constraints, and (2) more importantly, DeiTs are more robust than ResNets under small ϵ and gradually become more vulnerable than ResNets as ϵ increases. For example, as ϵ gradually increases from 8/255 to 128/255 under L∞ attacks, DeiT-S switches from the winner to the loser in robustness as compared to ResNet-50.
Insights. This set of experiments, together with the analysis in Sec. 4.5, reflects that the perturbation density and the perturbation strength are two key determinants of the robustness ranking between ViTs and CNNs: a higher/lower perturbation density or perturbation strength makes ViTs the loser/winner in robustness. This first-time finding enhances the understanding of the robustness ranking between ViTs and CNNs and can aid decisions about which models to deploy in real-world scenarios with high security awareness.
We also benchmark the effectiveness of Patch-Fool on top of adversarially trained ViTs/CNNs, evaluate the patch-wise adversarial transferability of Patch-Fool, and visualize the adversarial examples generated by our Patch-Fool's different variants in Appendices A.2∼A.4, respectively.
5 CONCLUSION
The recent breakthroughs achieved by ViTs in various vision tasks have attracted increasing attention to ViTs' robustness, aiming to fulfill the goal of deploying ViTs into real-world vision applications. In this work, we provide a new perspective on ViTs' robustness and propose a novel attack framework, dubbed Patch-Fool, that attacks the basic component (i.e., a single patch) in ViTs' self-attention calculations, against which ViTs are found to be more vulnerable than CNNs. Interestingly, the proposed Sparse Patch-Fool and Mild Patch-Fool attacks, two variants of our Patch-Fool, further indicate that the perturbation density and perturbation strength onto each patch seem to be the two key factors that determine the robustness ranking between ViTs and CNNs. We believe this work can shed light on better understanding ViTs' robustness and inspire innovative defense techniques.
ACKNOWLEDGEMENT
The work is supported by the National Science Foundation (NSF) through the MLWiNS program (Award number: 2003137), an NSF CAREER award (Award number: 2048183), and the RTML program (Award number: 1937592).
A APPENDIX
A.1 EVALUATING ROBUSTNESS OF VITS AND CNNS UNDER EXISTING ATTACKS
Although various comparisons of the robustness of ViTs and CNNs have been explored in pioneering works (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021), their evaluations suffer from one of the following limitations: (1) adopting only weak attack methods, (2) adopting only early ViT designs without considering recently advanced ViT architectures, and (3) not adopting the official and latest pretrained models, thus suffering from inferior clean accuracies. To this end, we extensively evaluate the robustness against common white-box attacks of several representative ViT variants, which cover the popular trends in designing ViT architectures, including (1) using local self-attention (Swin (Liu et al., 2021a)), which adopts the attention mechanism within a local region instead of the global one in vanilla ViTs to capture low-level features and reduce the computational cost, and (2) introducing the inductive bias of CNNs to build hybrid models (LeViT (Graham et al., 2021)).
A.1.1 EVALUATION SETUP
Models and datasets. We evaluate the robustness of three ViT families (i.e., DeiT (Touvron et al., 2021), Swin (Liu et al., 2021a), and LeViT (Graham et al., 2021)) and two CNN families (ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2014)) on ImageNet using their official implementation and pretrained models. Note that we adopt DeiT models without distillation, which only improves the training schedule over vanilla ViTs, for a fair comparison.
Attack settings. We adopt four adversarial attacks (i.e., PGD (Madry et al., 2017), AutoAttack (Croce & Hein, 2020), CW-L∞ (Carlini & Wagner, 2017), and CW-L2) with different perturbation strengths. In particular, for the CW-L∞ and CW-L2 attacks, we adopt the implementation in AdverTorch (Ding et al., 2019) and the same settings as (Chen et al., 2021a; Rony et al., 2019); For AutoAttack, we adopt the official implementation and default settings in (Croce & Hein, 2020).
A.1.2 OBSERVATIONS AND ANALYSIS
Observations. From the evaluation results summarized in Tab. 8, we make the following observations: (1) ViTs are consistently more robust than CNNs with comparable model complexities under all attack methods, which is consistent with previous observations (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021). In particular, DeiT-S/DeiT-B achieves an 18.70%/10.11% higher robust accuracy over ResNet-50/ResNet-152 under PGD-20 attacks with a perturbation strength of 0.001; (2) compared with vanilla ViTs, ViT variants equipped with local self-attention or convolutional modules, which improve the model's capability to capture local features and thus boost the clean accuracy, are more vulnerable to adversarial attacks, although they are still more robust than CNNs with comparable complexities. For example, Swin-T/Swin-B suffers from a 14.90%/5.04% robust accuracy drop compared with DeiT-S/DeiT-B under PGD-20 attacks with a perturbation strength of 0.001; and (3) the degree of overparameterization has less influence on the robustness of the same family of ViT models compared with its great influence on CNNs' robustness, as the most lightweight DeiT-Ti can already achieve a robust accuracy comparable (-0.58%) to ResNet-152, while requiring 9.17×/46× fewer floating-point operations (FLOPs)/parameters.
Analysis. Combining the three insights drawn from the aforementioned observations, we can observe the superiority of the global attention mechanism over convolutional and local self-attention blocks, in terms of both improved robust accuracy and reduced sensitivity to the degree of model overparameterization. This indicates that the global attention mechanism itself can serve as a good robustification technique against existing adversarial attacks, even in lightweight ViTs with small model complexities. For example, as shown in Fig. 1, the gap between the attention maps generated by clean and adversarial inputs in deeper layers remains small. We thus ask: "Are the global attentions in ViTs truly robust, or has their vulnerability simply not been fully explored and exploited?" To answer this, we propose our customized attack Patch-Fool in Sec. 3 and find that the vulnerability of global attentions can be exploited to degrade the robustness of ViTs, making them more vulnerable learners than CNNs.
A.2 PATCH-FOOL ON TOP OF ADVERSARIALLY TRAINED MODELS
To study the influence of robust training algorithms against our Patch-Fool, we further benchmark the robustness of both adversarially trained ViTs and CNNs.
Setup. We apply Fast Adversarial Training (FAT) (Wong et al., 2019) with an ϵ of 2/255 and 4/255 under the L∞ constraint on top of both DeiT-Ti and ResNet-18 on ImageNet. We report the robust accuracy of the FAT trained models against our Patch-Fool in Tabs. 9 and 10.
Observations and analysis. From Tab. 9, we can observe that although FAT improves the robustness of both DeiT-Ti and ResNet-18 against our Patch-Fool attacks, DeiT-Ti is still more vulnerable to Patch-Fool than ResNet-18 under the same number of perturbed patches. In addition, we can observe from Tab. 10 that (1) stronger adversarial training with a larger ϵ leads to better robustness against both PGD attacks and our Patch-Fool, and (2) the improvement in robust accuracy against PGD attacks is higher than that against Patch-Fool, indicating that enhanced adversarial training schemes or other defense methods are required to robustify ViTs against our Patch-Fool, which we leave as future work.
A.3 PATCH-WISE ADVERSARIAL TRANSFERABILITY OF PATCH-FOOL
We further discuss the patch-wise adversarial transferability of Patch-Fool, i.e., transferring the perturbations generated for attacking one specific patch to attack other patches on the same image.
Setup. Without loss of generality, we generate the adversarial perturbation for the center patch with Patch-Fool, which is then applied to attack all other patches on the same image; the resulting robust accuracy is annotated in Fig. 3. We average the robust accuracy at each patch location over a batch of 128 images.
Observations. We can observe that the adversarial patches generated by Patch-Fool can be transferred to neighboring patches with more notable accuracy degradation, while the adversarial transferability between patches far away from each other is poor.
A.4 VISUALIZING THE ADVERSARIAL EXAMPLES GENERATED BY PATCH-FOOL’S VARIANTS
Here we visualize the adversarial examples generated by Patch-Fool's variants in Fig. 4, including (1) Patch-Fool with different numbers of perturbed patches (rows 2∼3), (2) Sparse Patch-Fool with a total of 250 perturbed pixels distributed across different numbers of perturbed patches (rows 4∼6), and (3) Mild Patch-Fool under L2 and L∞ constraints (rows 7∼8). The corresponding robust accuracy is also annotated.
Observations. From the aforementioned visualization in Fig. 4, we can observe that (1) the adversarial patches generated by Patch-Fool visually resemble and emulate natural corruptions in a small region of the original image, caused by potential sensor defects or noise/damage in the optical devices (see row 2), (2) more perturbed patches lead to a lower robust accuracy and worse imperceptibility (see row 3), (3) the generated adversarial perturbations of our Sparse Patch-Fool resemble impulse noise, which improves imperceptibility while still notably degrading the robust accuracy, especially when perturbing more patches (see rows 4∼6), and (4) adding L2 and L∞ constraints notably improves the imperceptibility while incurring less degradation in the robust accuracy (rows 7∼8). | 1. What is the main contribution of the paper, and how does it build upon previous research?
2. What are the strengths and weaknesses of the paper, particularly regarding its experimental design and claims?
3. How does the reviewer assess the novelty and significance of the proposed attack framework?
4. Are there any minor suggestions or discussions that could enhance the paper's content or relevance?
5. How does the reviewer view the relationship between the paper's findings and the broader context of adversarial robustness in computer vision models? | Summary Of The Paper
Review | Summary Of The Paper
Given recent findings showing that ViTs are more robust than CNNs, this paper investigates an intriguing question: “Under what kinds of perturbations do ViTs become weaker learners compared to CNNs?" They propose Patch-Fool to fool the attention mechanism. Their investigation leads to some interesting findings and might inspire more interesting future work.
Review
The strengths of this work include (a) extensive experiments benchmarking the robustness of different ViT variants; (b) proposing a new attack framework; (c) insightful findings.
Weaknesses: (a) Given the existing findings regarding the robustness comparison between ViTs and CNNs, this work might look a little incremental. The paper clearly acknowledges those works, and I think it is not necessarily a negative point. (b) The authors mention that their work might inspire innovative defense techniques. I do not see the rationale behind this claim. I suggest the authors remove this claim or illustrate it more clearly.
Minor suggestions and discussions: A recent work [1] has also investigated the MLP-Mixer beyond ViT; it is suggested to discuss it. Is it possible to apply the proposed attack method to MLP-Mixer as well? [1] also discusses universal attacks, and I am curious whether the above attack method can be extended beyond image-dependent attacks to universal ones. Do the authors think the reason that ViTs are weaker against Patch-Fool might be explained from the shift-invariance perspective [1,2]?
[1] Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs
[2] Shift Invariance Can Reduce Adversarial Robustness
ICLR | Title
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Abstract
Vision transformers (ViTs) have recently set off a new wave in neural architecture design thanks to their record-breaking performance in various vision tasks. In parallel, to fulfill the goal of deploying ViTs into real-world vision applications, their robustness against potential malicious attacks has gained increasing attention. In particular, recent works show that ViTs are more robust against adversarial attacks as compared with convolutional neural networks (CNNs), and conjecture that this is because ViTs focus more on capturing global interactions among different input/feature patches, leading to their improved robustness to local perturbations imposed by adversarial attacks. In this work, we ask an intriguing question: “Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?” Driven by this question, we first conduct a comprehensive experiment regarding the robustness of both ViTs and CNNs under various existing adversarial attacks to understand the underlying reason favoring their robustness. Based on the drawn insights, we then propose a dedicated attack framework, dubbed Patch-Fool, that fools the self-attention mechanism by attacking its basic component (i.e., a single patch) with a series of attention-aware optimization techniques. Interestingly, our Patch-Fool framework shows for the first time that ViTs are not necessarily more robust than CNNs against adversarial perturbations. In particular, we find that ViTs are more vulnerable learners compared with CNNs against our Patch-Fool attack which is consistent across extensive experiments, and the observations from Sparse/Mild Patch-Fool, two variants of Patch-Fool, indicate an intriguing insight that the perturbation density and strength on each patch seem to be the key factors that influence the robustness ranking between ViTs and CNNs. It can be expected that our Patch-Fool framework will shed light on both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment. Our codes are available at https://github.com/RICE-EIC/Patch-Fool.
1 INTRODUCTION
The recent performance breakthroughs achieved by vision transformers (ViTs) (Dosovitskiy et al., 2020) have fueled an increasing enthusiasm towards designing new ViT architectures for different vision tasks, including object detection (Carion et al., 2020; Beal et al., 2020), semantic segmentation (Strudel et al., 2021; Zheng et al., 2021; Wang et al., 2021), and video recognition (Arnab et al., 2021; Liu et al., 2021b; Li et al., 2021b; Fan et al., 2021). To fulfill the goal of deploying ViTs into real-world vision applications, the security of ViTs is of great importance yet remains challenging, especially in the context of adversarial attacks (Goodfellow et al., 2014), under which an imperceptible perturbation onto the inputs can mislead the models to malfunction.
∗Equal contribution.
In response, the robustness of ViTs against adversarial attacks has attracted increasing attention. For example, recent works (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021) find that in addition to ViTs' decent task performances, they are more robust to adversarial attacks compared with convolutional neural networks (CNNs) under comparable model complexities. In particular, (Shao et al., 2021) claims that ViTs focus more on capturing the global interaction among input/feature patches via their self-attention mechanism and that the learned features contain less low-level information, leading to superior robustness to the local perturbations introduced by adversarial attacks. A natural response to this seemingly good news would be determining whether ViTs are truly robust against all kinds of adversarial perturbations, or whether their current win in robustness is an artifact of biased evaluations using existing attack methods that are mostly dedicated to CNNs. To unveil the potential vulnerability of ViTs, this work takes the first step in asking an intriguing question: “Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?”, and makes the following contributions:
• We propose a new attack framework, dubbed Patch-Fool, aiming to fool the self-attention mechanism by attacking the basic component (i.e., a single patch) participating in ViTs’ self-attention calculations. Our Patch-Fool attack features a novel objective formulation, which is then solved by Patch-Fool’s integrated attention-aware patch selection technique and attention-aware loss design;
• We evaluate the robustness of both ViTs and CNNs against our Patch-Fool attack with extensive experiments and find that ViTs are consistently less robust than CNNs across various attack settings, indicating that ViTs are not always robust learners and their seeming robustness against existing attacks can be overturned under dedicated adversarial attacks;
• We further benchmark the robustness of both ViTs and CNNs under two variants of Patch-Fool, i.e., Sparse Patch-Fool and Mild Patch-Fool, and discover that the perturbation density, defined as the number of perturbed pixels per patch, and the perturbation strength highly influence the robustness ranking between ViTs and CNNs, where our Patch-Fool is an extreme case of high perturbation density and strength.
We believe our work has opened up a new perspective for exploring ViTs’ vulnerability and understanding the different behaviors of CNNs and ViTs under adversarial attacks, and can provide insights to both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment.
2 RELATED WORKS
Vision transformers. Motivated by the great success of Transformers in the natural language processing (NLP) field (Vaswani et al., 2017), ViTs have been developed by splitting an input image into a series of image patches and adopting self-attention modules for encoding the image (Dosovitskiy et al., 2020), and have been shown to achieve competitive or superior performance over CNNs via dedicated data augmentation (Touvron et al., 2021) or self-attention structures (Yang et al., 2021; Graham et al., 2021; Liu et al., 2021a). As such, there has been tremendously increased attention on applying ViTs to various computer vision applications, such as self-supervised learning (Caron et al., 2021; Chen et al., 2021b; Xie et al., 2021; Li et al., 2021a), object detection (Carion et al., 2020; Beal et al., 2020), and semantic segmentation (Strudel et al., 2021; Zheng et al., 2021; Wang et al., 2021). The achievable performance of ViTs is continuously refreshed by emerging ViT variants, which provide new ideas for designing ViT architectures. For example, convolutional modules have been incorporated into ViTs for capturing low-level features (Xiao et al., 2021; Wu et al., 2021; Graham et al., 2021; Peng et al., 2021), and replacing the global self-attention mechanism with local self-attention modules (Liu et al., 2021a; Dong et al., 2021; Liang et al., 2021; Liu et al., 2021b; Chu et al., 2021) has further pushed forward ViTs' achievable accuracy-efficiency trade-off. Motivated by the growing interest in deploying ViTs into real-world applications, this work aims to better understand the robustness of ViTs and to develop adversarial attacks dedicated to ViTs.
Adversarial attack and defense. Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks (Goodfellow et al., 2014), i.e., imperceptible perturbations onto the inputs can mislead DNNs into making wrong predictions. As adversaries, stronger attacks are continuously developed, including both white-box (Madry et al., 2017; Croce & Hein, 2020; Carlini & Wagner, 2017; Papernot et al., 2016; Moosavi-Dezfooli et al., 2016) and black-box ones (Chen et al., 2017; Ilyas et al., 2018b; Andriushchenko et al., 2020; Guo et al., 2019; Ilyas et al., 2018a), which aggressively degrade the performance of the target DNN models. In particular, (Brown et al., 2017; Liu et al., 2020) build universal adversarial patches that are able to attack different scenes, and (Liu et al., 2018; Zhao et al., 2020; Hoory et al., 2020) adopt adversarial patches to attack object detectors. However, these works focus merely on CNNs; questions regarding (1) whether patch-wise attacks are effective for ViTs as compared to CNNs, and (2) how to efficiently construct strong patch-wise attacks utilizing the unique structures of ViTs remain under-explored yet interesting to study, especially considering that patches are the basic elements composing the inputs of ViTs. In response, various defense methods (Guo et al., 2017; Xie et al., 2017; Cohen et al., 2019; Metzen et al., 2017; Feinman et al., 2017; Fu et al., 2021a;b; Shafahi et al., 2019; Madry et al., 2017; Wong et al., 2019) have been proposed to improve DNNs' robustness against those attacks. The readers are referred to (Akhtar & Mian, 2018; Chakraborty et al., 2018) for more attack and defense methods.
Robustness of vision transformers. Driven by the impressive performance recently achieved by ViTs in various vision tasks, their robustness has gained increasing attention. A consistent observation drawn by pioneering works that study ViTs’ robustness is that ViTs are more robust to adversarial attacks than CNNs since ViTs are more capable of capturing the global interactions among patches, while CNNs focus on local features and thus are more vulnerable to local adversarial perturbations. In particular, (Bhojanapalli et al., 2021) shows that ViT models pretrained with a sufficient amount of data are at least as robust as their ResNet counterparts on a broad range of perturbations, including natural corruptions, distribution shifts, and adversarial perturbations; (Aldahdooh et al., 2021) finds that vanilla ViTs or hybrid-ViTs are more robust than CNNs under Lp-based attacks; and (Shao et al., 2021) further explains that ViTs’ learned features contain less low-level information and are more generalizable, leading to their superior robustness, and introducing convolutional blocks that extract more low-level features will reduce the ViTs’ adversarial robustness. In addition, ViTs’ adversarial transferability has also been studied: (Mahmood et al., 2021) shows that adversarial examples do not readily transfer between CNNs and transformers and (Naseer et al., 2021; Wei et al., 2021) propose techniques to boost the adversarial transferability between ViTs and from ViTs to CNNs. In parallel, (Mao et al., 2021) refines ViTs’ architecture design to improve robustness. In our work, we challenge the common belief that ViTs are more robust than CNNs, which is concluded based on evaluations using existing attack methods, and propose to customize adaptive attacks utilizing ViTs’ captured patch-wise global interactions to make ViTs weaker learners.
3 THE PROPOSED PATCH-FOOL FRAMEWORK
In this section, we present our Patch-Fool attack method, which perturbs a whole patch to fool ViTs and unveils a vulnerable aspect of ViTs.
3.1 PATCH-FOOL: VALIDATING AND RETHINKING THE ROBUSTNESS OF VITS
We extensively evaluate the robustness of several representative ViT variants against four state-of-the-art adversarial attacks (i.e., PGD (Madry et al., 2017), AutoAttack (Croce & Hein, 2020), CW-L∞ (Carlini & Wagner, 2017), and CW-L2) with different perturbation strengths in Appendix A.1. We observe that (1) ViTs are consistently more robust than CNNs with comparable model complexities under all attack methods, which is consistent with previous observations (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021), and (2) ViT variants equipped with local self-attention (Swin (Liu et al., 2021a)) or convolutional modules (LeViT (Graham et al., 2021)), which improve the models' capability of capturing local features and thus boost the clean accuracy, are more vulnerable to adversarial attacks, although they are still more robust than CNNs with comparable complexities. This indicates that the global attention mechanism itself can serve as a good robustification technique against existing adversarial attacks, even in lightweight ViTs with small model complexities. For example, as shown in Fig. 1, the gap between the attention maps generated by clean and adversarial inputs in deeper layers remains small. We are thus curious: “Are the global attentions in ViTs truly robust, or has their vulnerability simply not been fully explored and exploited?” To answer this, we propose our customized attack in the following sections.
3.2 PATCH-FOOL: MOTIVATION
Given the insensitivity of ViTs' self-attention mechanism to local perturbations, we pay close attention to the basic component (i.e., a single patch) participating in the self-attention calculation, and hypothesize that customized adversarial perturbations onto a patch can be more effective in fooling the captured patch-wise global interactions of self-attention modules than attacking the CNN modules. This is also inspired by the word substitution attacks (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2019; Zang et al., 2019) on Transformers in NLP tasks, which replace a word with its synonyms; an image patch in ViTs serves a similar role as a word.
3.3 PATCH-FOOL: SETUP AND OBJECTIVE FORMULATION
Attack setup. In our proposed Patch-Fool attack, we do not limit the perturbation strength on each pixel and instead constrain all the perturbed pixels within one patch (or several patches), which can be viewed as a variant of sparse attacks (Dong et al., 2020; Modas et al., 2019; Croce & Hein, 2019). Such attack strategies lead to adversarial examples with a noisy patch as shown in Fig. 1, which visually resembles and emulates natural corruptions in a small region of the original image, e.g., one noisy patch accounts for only 1/196 of the input patches of DeiT-S (Touvron et al., 2021), caused by potential sensor defects or noise/damage in the optical devices.
Objective formulation. Given the loss function $J$ and a series of input image patches $\mathbf{X} = [\mathbf{x}_1, \cdots, \mathbf{x}_n]^\top \in \mathbb{R}^{n \times d}$ with its associated label $y$, the objective of our adversarial algorithm can be formulated as:

$$\mathop{\arg\max}_{1 \le p \le n,\; \mathbf{E} \in \mathbb{R}^{n \times d}} J(\mathbf{X} + \mathbb{1}_p \odot \mathbf{E},\, y) \quad (1)$$

where $\mathbf{E}$ denotes the adversarial perturbation, $\mathbb{1}_p \in \mathbb{R}^n$ is a one-hot vector with $\mathbb{1}_p(i) = 1$ if $i = p$ and $0$ otherwise, and $\odot$ represents the penetrating face product such that $\mathbf{a} \odot \mathbf{B} = [\mathbf{a} \circ \mathbf{b}_1, \cdots, \mathbf{a} \circ \mathbf{b}_d]$, where $\circ$ is the Hadamard product and $\mathbf{b}_j$ is the $j$-th column of matrix $\mathbf{B}$. For solving Eq. 1, our Patch-Fool needs to (1) select the adversarial patch $p$, and (2) optimize the corresponding $\mathbf{E}$, as elaborated in Sec. 3.4 and Sec. 3.5, respectively.
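As a minimal illustration of Eq. 1's perturbation model (the function name and tensor layout are our own assumptions), the single-patch update can be realized in PyTorch as:

```python
import torch

def apply_patch_perturbation(X, E, p):
    """X: (n, d) patch tokens of one image; E: (n, d) perturbation;
    p: index of the attacked patch. Returns X + 1_p ⊙ E, i.e., only
    row p of X is modified."""
    one_hot = torch.zeros(X.size(0), 1, dtype=X.dtype, device=X.device)
    one_hot[p] = 1.0
    return X + one_hot * E  # broadcasting realizes the penetrating face product
```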
3.4 PATCH-FOOL: DETERMINE p VIA ATTENTION-AWARE PATCH SELECTION
Denote $\mathbf{a}^{(l,h,i)} = [a^{(l,h,i)}_1, \cdots, a^{(l,h,i)}_n] \in \mathbb{R}^n$ as the attention distribution for the $i$-th token of the $h$-th head in the $l$-th layer. For each layer $l$, we define:

$$s^{(l)}_j = \sum_{h,i} a^{(l,h,i)}_j \quad (2)$$

which measures the importance of the $j$-th token in the $l$-th layer based on its contributions to other tokens in the self-attention calculation. For better fooling ViTs, we select the most influential patch $p = \arg\max_j s^{(l)}_j$ for a predefined layer $l$. We fix $l = 5$ by default, since the patches at later self-attention layers are observed to deviate from the input patches due to the increased information mixed in from other patches, making them non-ideal for guiding the selection of input patches, as justified in Sec. 4.3.
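A minimal sketch of Eq. 2 follows, assuming the per-layer attention maps have been extracted (e.g., via forward hooks) as tensors of shape (H, n, n); the function name and this layout are illustrative assumptions.

```python
import torch

def select_patch(attn_maps, l=5):
    """attn_maps[l][h, i, j] = a_j^{(l,h,i)}: attention that query token i of
    head h in layer l assigns to token j. Returns argmax_j s_j^{(l)} (Eq. 2)."""
    s = attn_maps[l].sum(dim=(0, 1))  # sum over heads h and query tokens i
    return int(s.argmax())            # index p of the most influential patch
```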
3.5 PATCH-FOOL: OPTIMIZE E VIA ATTENTION-AWARE LOSS
Given the selected adversarial patch index $p$ from the above step, we define the attention-aware loss for the $l$-th layer as follows:

$$J^{(l)}_{\mathrm{ATTN}}(\mathbf{X}, p) = \sum_{h,i} a^{(l,h,i)}_p \quad (3)$$

which is expected to be maximized so that the adversarial patch $p$, serving as the target adversarial patch, can attract more attention from other patches for more effectively fooling ViTs. The perturbation $\mathbf{E}$ is then updated based on both the final classification loss, i.e., the cross-entropy loss $J_{\mathrm{CE}}$, and the layer-wise attention-aware losses:

$$J(\tilde{\mathbf{X}}, y, p) = J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y) + \alpha \sum_{l} J^{(l)}_{\mathrm{ATTN}}(\tilde{\mathbf{X}}, p) \quad (4)$$
where $\tilde{\mathbf{X}} \triangleq \mathbf{X} + \mathbb{1}_p \odot \mathbf{E}$ and $\alpha$ is a weighting coefficient for controlling $\sum_l J^{(l)}_{\mathrm{ATTN}}(\tilde{\mathbf{X}}, p)$. We further adopt PCGrad (Yu et al., 2020) to avoid the gradient conflict between the two losses, and thus the update of the perturbation $\mathbf{E}$ is calculated as:

$$\delta_{\mathbf{E}} = \nabla_{\mathbf{E}} J(\tilde{\mathbf{X}}, y, p) - \alpha \sum_{l} \beta_l \nabla_{\mathbf{E}} J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y) \quad (5)$$

where

$$\beta_l = \begin{cases} 0, & \left\langle \nabla_{\mathbf{E}} J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y),\, \nabla_{\mathbf{E}} J^{(l)}_{\mathrm{ATTN}}(\tilde{\mathbf{X}}, p) \right\rangle > 0 \\[6pt] \dfrac{\left\langle \nabla_{\mathbf{E}} J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y),\, \nabla_{\mathbf{E}} J^{(l)}_{\mathrm{ATTN}}(\tilde{\mathbf{X}}, p) \right\rangle}{\left\| \nabla_{\mathbf{E}} J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y) \right\|^2}, & \text{otherwise} \end{cases} \quad (6)$$
Following PGD (Madry et al., 2017), we iteratively update $\mathbf{E}$ using an Adam optimizer (Kingma & Ba, 2014):

$$\mathbf{E}_{t+1} = \mathbf{E}_t + \eta \cdot \mathrm{Adam}(\delta_{\mathbf{E}_t}) \quad (7)$$

where $\eta$ is the step size for each update.
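Putting Eqs. 3-7 together, a minimal sketch of one Patch-Fool update in PyTorch is given below; the `forward` API returning both logits and per-layer attention maps (shape (H, n, n), kept in the autograd graph) is our own illustrative assumption, as are the function and variable names.

```python
import torch
import torch.nn.functional as F

def patch_fool_step(E, X, y, p, forward, opt, alpha=0.002):
    """One update of Eqs. 3-7. E: (n, d) leaf tensor with requires_grad=True,
    optimized by opt = torch.optim.Adam([E], lr=eta); y: label of shape (1,)."""
    one_hot = torch.zeros(X.size(0), 1, device=X.device)
    one_hot[p] = 1.0
    logits, attn = forward(X + one_hot * E)            # X~ = X + 1_p ⊙ E
    g_ce = torch.autograd.grad(F.cross_entropy(logits, y), E,
                               retain_graph=True)[0]
    delta = g_ce.clone()
    for a_l in attn:                                   # layer-wise loss, Eq. 3
        g_attn = torch.autograd.grad(a_l[:, :, p].sum(), E,
                                     retain_graph=True)[0]
        dot = (g_ce * g_attn).sum()
        beta = 0.0 if dot > 0 else dot / g_ce.pow(2).sum()   # Eq. 6
        delta = delta + alpha * (g_attn - beta * g_ce)       # Eqs. 4-5
    E.grad = -delta    # Adam minimizes, so negate delta to ascend (Eq. 7)
    opt.step()
```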
3.6 SPARSE PATCH-FOOL: A SPARSE VARIANT OF PATCH-FOOL
Motivation. One natural question associated with Patch-Fool is: “How many pixels within a patch are needed to be perturbed for effectively misleading the model to misclassify the input image?”. There exist two extreme cases: (1) perturbing only a few pixels that lead to local perturbations against which ViTs are more robust, and (2) perturbing the whole patch, i.e., our vanilla Patch-Fool. We hypothesize that answering this question helps better understand under what circumstances ViTs are more (or less) robust than CNNs. To this end, we study a variant of Patch-Fool, dubbed Sparse Patch-Fool, as defined below.
Objective formulation. For enabling Sparse Patch-Fool, we add a sparsity constraint to Eq. 1, i.e.:

$$\mathop{\arg\max}_{1 \le p \le n,\; \mathbf{E} \in \mathbb{R}^{n \times d},\; \mathbf{M} \in \{0,1\}^{n \times d}} J(\mathbf{X} + \mathbb{1}_p \odot (\mathbf{M} \circ \mathbf{E}),\, y) \quad \text{s.t.} \quad \|\mathbf{M}\|_0 \le k \quad (8)$$
where we use a binary mask $\mathbf{M}$ with a predefined sparsity parameter $k$ to control the sparsity of $\mathbf{E}$. To effectively learn the binary distribution of $\mathbf{M}$, we parameterize $\mathbf{M}$ with a continuous counterpart $\hat{\mathbf{M}}$, following (Ramanujan et al., 2020; Diffenderfer & Kailkhura, 2021). During the forward pass, only the top-$k$ highest elements of $\hat{\mathbf{M}}$ are activated and set to 1 while the others are set to 0 to satisfy the target sparsity constraint; during the backward pass, all the elements in $\hat{\mathbf{M}}$ are updated via straight-through estimation (Bengio et al., 2013). We jointly optimize $\hat{\mathbf{M}}$ with $\mathbf{E}$ as in Eq. 7.
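A minimal sketch of this top-k binarization with a straight-through backward pass is given below, under the assumption that the mask is optimized jointly with E as described (names are illustrative).

```python
import torch

class TopKMask(torch.autograd.Function):
    """Forward: binarize continuous scores m_hat into a top-k 0/1 mask.
    Backward: pass the gradient straight through (Bengio et al., 2013)."""
    @staticmethod
    def forward(ctx, m_hat, k):
        mask = torch.zeros_like(m_hat)
        idx = m_hat.flatten().topk(k).indices
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # identity w.r.t. m_hat; no gradient for k

# Usage inside Sparse Patch-Fool (Eq. 8):
# M = TopKMask.apply(m_hat, k)
# X_adv = X + one_hot * (M * E)   # perturb only the k selected pixels
```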
3.7 MILD PATCH-FOOL: A MILD VARIANT OF PATCH-FOOL
In addition to the number of perturbed pixels manipulated by Sparse Patch-Fool, the perturbation strength is another dimension for measuring the perturbations within a patch. We therefore also propose a mild variant of Patch-Fool, dubbed Mild Patch-Fool, with a constraint on the norm of the perturbation $\mathbf{E}$ to ensure $\|\mathbf{E}\|_2 \le \epsilon$ or $\|\mathbf{E}\|_\infty \le \epsilon$, known as the L2 and L∞ constraints, respectively. We achieve this by scaling (for the L2 constraint) or clipping (for the L∞ constraint) $\mathbf{E}$ after updating it.
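A minimal sketch of this post-update projection (the function name is an illustrative assumption):

```python
import torch

@torch.no_grad()
def project_mild(E, eps, norm="linf"):
    """Project E back into the eps-ball after each update, realizing Mild
    Patch-Fool's constraint: clipping for L-inf, scaling for L2."""
    if norm == "linf":
        E.clamp_(-eps, eps)         # ||E||_inf <= eps
    else:
        n = E.norm(p=2)             # measured jointly over all perturbed patches
        if n > eps:
            E.mul_(eps / n)         # ||E||_2 <= eps
    return E
```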
4 EVALUATION OF PATCH-FOOL
4.1 EVALUATION SETUP
Models and datasets. We mainly benchmark the robustness of the DeiT (Touvron et al., 2021) family with the ResNet (He et al., 2016) family, using their official pretrained models. Note that we adopt DeiT models without distillation for a fair comparison. We randomly select 2500 images from the validation set of ImageNet for evaluating robustness, following (Bhojanapalli et al., 2021).
Patch-Fool settings. The weight coefficient α in Eq. 4 is set to 0.002. The step size η in Eq. 7 is initialized to 0.2 and decayed by 0.95 every 10 iterations, and the number of total iterations is 250. For evaluating Patch-Fool with different perturbation strengths, we allow Patch-Fool to attack up to four patches based on the attention-aware patch selection in Sec. 3.4, i.e., the patches with the top importance scores defined in Eq. 2 are selected. Note that we report the robust accuracy instead of the attack success rate throughout this paper, as our main focus is the robustness benchmark.
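For reference, the stated step-size schedule amounts to the following (a trivial sketch; the function name is ours):

```python
def step_size(t, eta0=0.2, decay=0.95, every=10):
    """Step size at iteration t: eta0 decayed by `decay` every `every` steps,
    for 250 iterations in total in our setting."""
    return eta0 * decay ** (t // every)
```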
| 1. What is the focus of the paper regarding the robustness of vision transformers?
2. What are the strengths of the proposed approach, particularly in identifying the vulnerability of ViT?
3. What are the weaknesses of the paper, especially regarding the lack of consideration for robust training?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies the robustness of vision transformers (ViTs) from the perspective of adversarial attacks on patches, where the attack algorithm is particularly designed to fool the attention mechanism. While some prior works show that ViTs have better adversarial robustness than CNNs, this paper shows that ViTs have worse robustness against patch attacks when only a few patches are manipulated and the perturbations are dense within the patches.
Review
Strengths
This paper proposes a particular algorithm to attack vision transformers by considering the attention mechanism.
This paper successfully identifies the vulnerability of ViT against dense patch attacks, while ViT was more robust under traditional Lp perturbations or natural perturbations in previous works.
The paper has comprehensive experiments benchmarking the robustness of ViT and CNN models, in terms of different patch selection strategies, numbers of perturbed patches, and the sparsity of perturbation within each patch.
Weaknesses
This paper doesn't consider robust training for ViTs to improve their robustness, or evaluate the robustness of more robust models.
I think "ARE VISION TRANSFORMERS ALWAYS ROBUST AGAINST ADVERSARIAL PERTURBATIONS?" in the title is not very meaningful, because it's known that no model can be robust against all perturbations so far, and even under the perturbations used in existing works, ViTs are not robust; they were just relatively more robust than CNNs.
ICLR | Title
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Abstract
Vision transformers (ViTs) have recently set off a new wave in neural architecture design thanks to their record-breaking performance in various vision tasks. In parallel, to fulfill the goal of deploying ViTs into real-world vision applications, their robustness against potential malicious attacks has gained increasing attention. In particular, recent works show that ViTs are more robust against adversarial attacks as compared with convolutional neural networks (CNNs), and conjecture that this is because ViTs focus more on capturing global interactions among different input/feature patches, leading to their improved robustness to local perturbations imposed by adversarial attacks. In this work, we ask an intriguing question: “Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?” Driven by this question, we first conduct a comprehensive experiment regarding the robustness of both ViTs and CNNs under various existing adversarial attacks to understand the underlying reason favoring their robustness. Based on the drawn insights, we then propose a dedicated attack framework, dubbed Patch-Fool, that fools the self-attention mechanism by attacking its basic component (i.e., a single patch) with a series of attention-aware optimization techniques. Interestingly, our Patch-Fool framework shows for the first time that ViTs are not necessarily more robust than CNNs against adversarial perturbations. In particular, we find that ViTs are more vulnerable learners compared with CNNs against our Patch-Fool attack which is consistent across extensive experiments, and the observations from Sparse/Mild Patch-Fool, two variants of Patch-Fool, indicate an intriguing insight that the perturbation density and strength on each patch seem to be the key factors that influence the robustness ranking between ViTs and CNNs. It can be expected that our Patch-Fool framework will shed light on both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment. Our codes are available at https://github.com/RICE-EIC/Patch-Fool.
1 INTRODUCTION
The recent performance breakthroughs achieved by vision transformers (ViTs) (Dosovitskiy et al., 2020) have fueled an increasing enthusiasm towards designing new ViT architectures for different vision tasks, including object detection (Carion et al., 2020; Beal et al., 2020), semantic segmentation (Strudel et al., 2021; Zheng et al., 2021; Wang et al., 2021), and video recognition (Arnab et al., 2021; Liu et al., 2021b; Li et al., 2021b; Fan et al., 2021). To fulfill the goal of deploying ViTs into real-world vision applications, the security concern of ViTs is of great importance and challenge, especially in the context of adversarial attacks (Goodfellow et al., 2014), under which an imperceptible perturbation onto the inputs can mislead the models to malfunction.
In response, the robustness of ViTs against adversarial attacks has attracted increasing attention. For example, recent works (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021) find that in addition to ViTs’ decent task performances, they are more robust to adversarial attacks compared with convolutional neural networks (CNNs) under comparable model complexities. In particular, (Shao et al., 2021) claims that ViTs focus more on capturing the global interaction among
input/feature patches via its self-attention mechanism and the learned features contain less low-level information, leading to superior robustness to the local perturbations introduced by adversarial attacks. A natural response to this seemingly good news would be determining whether ViTs are truly robust against all kinds of adversarial perturbations or if their current win in robustness is merely the result of biased evaluations using existing attack methods that are mostly dedicated to CNNs. To unveil the potential vulnerability of ViTs, this work takes the first step in asking an intriguing question: "Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?", and makes the following contributions:
• We propose a new attack framework, dubbed Patch-Fool, aiming to fool the self-attention mechanism by attacking the basic component (i.e., a single patch) participating in ViTs’ self-attention calculations. Our Patch-Fool attack features a novel objective formulation, which is then solved by Patch-Fool’s integrated attention-aware patch selection technique and attention-aware loss design;
• We evaluate the robustness of both ViTs and CNNs against our Patch-Fool attack with extensive experiments and find that ViTs are consistently less robust than CNNs across various attack settings, indicating that ViTs are not always robust learners and their seeming robustness against existing attacks can be overturned under dedicated adversarial attacks;
• We further benchmark the robustness of both ViTs and CNNs under two variants of Patch-Fool, i.e., Sparse Patch-Fool and Mild Patch-Fool, and discover that the perturbation density, defined as the number of perturbed pixels per patch, and the perturbation strength highly influence the robustness ranking between ViTs and CNNs, where our Patch-Fool is an extreme case of high perturbation density and strength.
We believe our work has opened up a new perspective for exploring ViTs’ vulnerability and understanding the different behaviors of CNNs and ViTs under adversarial attacks, and can provide insights to both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment.
2 RELATED WORKS
Vision transformers. Motivated by the great success of Transformers in the natural language processing (NLP) field (Vaswani et al., 2017), ViTs have been developed by splitting an input image into a series of image patches and adopting self-attention modules for encoding the image (Dosovitskiy et al., 2020), and have been shown to achieve competitive or superior performance over CNNs via dedicated data augmentation (Touvron et al., 2021) or self-attention structures (Yang et al., 2021; Graham et al., 2021; Liu et al., 2021a). As such, there has been tremendously increased attention on applying ViTs to various computer vision applications, such as self-supervised learning (Caron et al., 2021; Chen et al., 2021b; Xie et al., 2021; Li et al., 2021a), object detection (Carion et al., 2020; Beal et al., 2020), and semantic segmentation (Strudel et al., 2021; Zheng et al., 2021; Wang et al., 2021). The achievable performance of ViTs is continuously refreshed by emerging ViT variants, which provide new directions for designing ViT architectures. For example, convolutional modules have been incorporated into ViTs for capturing low-level features (Xiao et al., 2021; Wu et al., 2021; Graham et al., 2021; Peng et al., 2021), and replacing the global self-attention mechanism with local self-attention modules (Liu et al., 2021a; Dong et al., 2021; Liang et al., 2021; Liu et al., 2021b; Chu et al., 2021) has further pushed forward ViTs' achievable accuracy-efficiency trade-off. Motivated by the growing interest in deploying ViTs into real-world applications, this work aims to better understand the robustness of ViTs and to develop adversarial attacks dedicated to ViTs.
Adversarial attack and defense. Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks (Goodfellow et al., 2014), i.e., imperceptible perturbations onto the inputs can mislead DNNs to make wrong predictions. As adversaries, stronger attacks are continuously developed, including both white-box (Madry et al., 2017; Croce & Hein, 2020; Carlini & Wagner, 2017; Papernot et al., 2016; Moosavi-Dezfooli et al., 2016) and black-box ones (Chen et al., 2017; Ilyas et al., 2018b; Andriushchenko et al., 2020; Guo et al., 2019; Ilyas et al., 2018a), which aggressively degrade the performances of the target DNN models. In particular, (Brown et al., 2017; Liu et al., 2020) build universal adversarial patches that are able to attack different scenes, and (Liu et al., 2018; Zhao et al., 2020; Hoory et al., 2020) adopt adversarial patches to attack object detectors. However, these works focus merely on CNNs; questions regarding (1) whether patch-wise attacks are effective for ViTs as compared to CNNs, and (2) how to efficiently construct strong patch-wise attacks utilizing the unique structures of ViTs remain under-explored yet interesting to study, especially considering that patches are the basic elements composing the inputs of ViTs. In response, various defense methods (Guo et al., 2017; Xie et al., 2017; Cohen et al., 2019; Metzen et al., 2017; Feinman et al., 2017; Fu et al., 2021a;b; Shafahi et al., 2019; Madry et al., 2017; Wong et al., 2019) have been proposed to improve DNNs' robustness against those attacks. The readers are referred to (Akhtar & Mian, 2018; Chakraborty et al., 2018) for more attack and defense methods.
Robustness of vision transformers. Driven by the impressive performance recently achieved by ViTs in various vision tasks, their robustness has gained increasing attention. A consistent observation drawn by pioneering works that study ViTs’ robustness is that ViTs are more robust to adversarial attacks than CNNs since ViTs are more capable of capturing the global interactions among patches, while CNNs focus on local features and thus are more vulnerable to local adversarial perturbations. In particular, (Bhojanapalli et al., 2021) shows that ViT models pretrained with a sufficient amount of data are at least as robust as their ResNet counterparts on a broad range of perturbations, including natural corruptions, distribution shifts, and adversarial perturbations; (Aldahdooh et al., 2021) finds that vanilla ViTs or hybrid-ViTs are more robust than CNNs under Lp-based attacks; and (Shao et al., 2021) further explains that ViTs’ learned features contain less low-level information and are more generalizable, leading to their superior robustness, and introducing convolutional blocks that extract more low-level features will reduce the ViTs’ adversarial robustness. In addition, ViTs’ adversarial transferability has also been studied: (Mahmood et al., 2021) shows that adversarial examples do not readily transfer between CNNs and transformers and (Naseer et al., 2021; Wei et al., 2021) propose techniques to boost the adversarial transferability between ViTs and from ViTs to CNNs. In parallel, (Mao et al., 2021) refines ViTs’ architecture design to improve robustness. In our work, we challenge the common belief that ViTs are more robust than CNNs, which is concluded based on evaluations using existing attack methods, and propose to customize adaptive attacks utilizing ViTs’ captured patch-wise global interactions to make ViTs weaker learners.
3 THE PROPOSED PATCH-FOOL FRAMEWORK
In this section, we present our Patch-Fool attack method, which perturbs a whole patch to fool ViTs and unveils a vulnerable aspect of ViTs.
3.1 PATCH-FOOL: VALIDATING AND RETHINKING THE ROBUSTNESS OF VITS
We extensively evaluate the robustness of several representative ViT variants against four state-of-the-art adversarial attacks (i.e., PGD (Madry et al., 2017), AutoAttack (Croce & Hein, 2020), CW-L∞ (Carlini & Wagner, 2017), and CW-L2) with different perturbation strengths in Appendix A.1. We observe that (1) ViTs are consistently more robust than CNNs with comparable model complexities under all attack methods, which is consistent with previous observations (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021), and (2) ViT variants equipped with local self-attention (Swin (Liu et al., 2021a)) or convolutional modules (LeViT (Graham et al., 2021)), which improve the model's capability in capturing local features and thus boost the clean accuracy, are more vulnerable to adversarial attacks, although they are still more robust than CNNs with comparable complexities. This indicates that the global attention mechanism itself can serve as a good robustification technique against existing adversarial attacks, even in lightweight ViTs with small model complexities. For example, as shown in Fig. 1, the gap between the attention maps generated by clean and adversarial inputs in deeper layers remains small. We are thus curious: "Are the global attentions in ViTs truly robust, or has their vulnerability not been fully explored and exploited?" To answer this, we propose our customized attack in the following sections.
3.2 PATCH-FOOL: MOTIVATION
Given the insensitivity of ViTs' self-attention mechanism to local perturbations, we pay close attention to the basic component (i.e., a single patch) participating in the self-attention calculation, and hypothesize that customized adversarial perturbations on a patch can be more effective in fooling the captured patch-wise global interactions of self-attention modules than in attacking CNN modules. This is also inspired by word substitution attacks (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2019; Zang et al., 2019) on Transformers in NLP tasks, which replace a word with its synonyms; an image patch in ViTs serves a role similar to that of a word.
3.3 PATCH-FOOL: SETUP AND OBJECTIVE FORMULATION
Attack setup. In our proposed Patch-Fool attack, we do not limit the perturbation strength on each pixel and instead constrain all the perturbed pixels to lie within one patch (or several patches), which can be viewed as a variant of sparse attacks (Dong et al., 2020; Modas et al., 2019; Croce & Hein, 2019). Such attack strategies lead to adversarial examples with a noisy patch, as shown in Fig. 1, which visually resembles and emulates natural corruptions in a small region of the original image caused by potential defects of the sensors or potential noises/damages of the optical devices; e.g., one noisy patch accounts for only 1/196 of the input patches of DeiT-S (Touvron et al., 2021).
Objective formulation. Given the loss function J and a series of input image patches X = [x_1, · · · , x_n]^⊤ ∈ R^{n×d} with associated label y, the objective of our adversarial algorithm can be formulated as:

$$\operatorname*{argmax}_{1 \le p \le n,\ \mathbf{E} \in \mathbb{R}^{n \times d}} J(\mathbf{X} + \mathbf{1}_p \odot \mathbf{E},\ y) \tag{1}$$

where E denotes the adversarial perturbation, 1_p ∈ R^n is a one-hot vector with 1_p(i) = 1 if i = p and 1_p(i) = 0 otherwise, and ⊙ represents the penetrating face product such that a ⊙ B = [a ◦ b_1, · · · , a ◦ b_d], where ◦ is the Hadamard product and b_j is the j-th column of matrix B. For solving Eq. 1, our Patch-Fool needs to (1) select the adversarial patch p and (2) optimize the corresponding E, as elaborated in Sec. 3.4 and Sec. 3.5, respectively.
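To make the notation concrete, the following is a minimal PyTorch sketch of the perturbation model in Eq. 1; the function name, tensor shapes, and the single-image (unbatched) setting are our own assumptions, not taken from the released codebase.

```python
import torch

def apply_patch_perturbation(X: torch.Tensor, E: torch.Tensor, p: int) -> torch.Tensor:
    """Compute X + 1_p (penetrating face product) E from Eq. 1.

    X, E: (n, d) tensors holding n flattened patches of d pixels each.
    p:    index of the single patch allowed to be perturbed.
    """
    one_hot = torch.zeros(X.size(0), 1, device=X.device, dtype=X.dtype)
    one_hot[p] = 1.0
    # Broadcasting the one-hot column across the d pixel columns zeroes out
    # every row of E except row p, so only patch p receives the perturbation.
    return X + one_hot * E
```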
3.4 PATCH-FOOL: DETERMINE p VIA ATTENTION-AWARE PATCH SELECTION
Denote a^(l,h,i) = [a_1^(l,h,i), · · · , a_n^(l,h,i)] ∈ R^n as the attention distribution for the i-th token of the h-th head in the l-th layer. For each layer l, we define:

$$s_j^{(l)} = \sum_{h,i} a_j^{(l,h,i)} \tag{2}$$

which measures the importance of the j-th token in the l-th layer based on its contributions to other tokens in the self-attention calculation. For better fooling ViTs, we select the most influential patch p = argmax_j s_j^(l) at a predefined layer l. We fix l = 5 by default since the patches at later self-attention layers are observed to diverge from the input patches due to the increased information mixed in from other patches, making them non-ideal for guiding the selection of input patches, as justified in Sec. 4.3.
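For concreteness, a minimal sketch of this attention-aware patch selection is given below; we assume the per-layer attention maps have already been recorded (e.g., via forward hooks) and omit any special handling of the class token, so the shapes and names here are our own assumptions.

```python
import torch

def select_influential_patch(attn_maps, layer: int = 5) -> int:
    """Pick the patch with the largest importance score s_j^(l) of Eq. 2.

    attn_maps: list of per-layer attention tensors of shape (heads, n, n),
               where attn_maps[l][h, i, j] is the attention that query token i
               pays to token j in head h of layer l.
    """
    a = attn_maps[layer]         # (heads, n, n)
    scores = a.sum(dim=(0, 1))   # s_j^(l): sum over heads h and query tokens i
    return int(scores.argmax())  # index p of the most influential patch
```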
3.5 PATCH-FOOL: OPTIMIZE E VIA ATTENTION-AWARE LOSS
Given the selected adversarial patch index p from the above step, we define the attention-aware loss for the l-th layer as follows:
$$J_{\mathrm{ATTN}}^{(l)}(\mathbf{X}, p) = \sum_{h,i} a_p^{(l,h,i)} \tag{3}$$

which is expected to be maximized so that the adversarial patch p, serving as the target adversarial patch, can attract more attention from other patches for more effectively fooling ViTs. The perturbation E is then updated based on both the final classification loss, i.e., the cross-entropy loss J_CE, and the layer-wise attention-aware loss:

$$J(\tilde{\mathbf{X}}, y, p) = J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y) + \alpha \sum_l J_{\mathrm{ATTN}}^{(l)}(\tilde{\mathbf{X}}, p) \tag{4}$$

where X̃ ≜ X + 1_p ⊙ E and α is a weighting coefficient controlling the attention-aware term. We further adopt PCGrad (Yu et al., 2020) to avoid gradient conflicts between the two losses, so the update direction for the perturbation E is calculated as:
$$\delta_{\mathbf{E}} = \nabla_{\mathbf{E}} J(\tilde{\mathbf{X}}, y, p) - \alpha \sum_l \beta_l \nabla_{\mathbf{E}} J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y) \tag{5}$$

where

$$\beta_l = \begin{cases} 0, & \big\langle \nabla_{\mathbf{E}} J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y),\ \nabla_{\mathbf{E}} J_{\mathrm{ATTN}}^{(l)}(\tilde{\mathbf{X}}, p) \big\rangle > 0 \\[4pt] \dfrac{\big\langle \nabla_{\mathbf{E}} J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y),\ \nabla_{\mathbf{E}} J_{\mathrm{ATTN}}^{(l)}(\tilde{\mathbf{X}}, p) \big\rangle}{\big\| \nabla_{\mathbf{E}} J_{\mathrm{CE}}(\tilde{\mathbf{X}}, y) \big\|^2}, & \text{otherwise} \end{cases} \tag{6}$$
Following PGD (Madry et al., 2017), we iteratively update E using an Adam optimizer (Kingma & Ba, 2014):

$$\mathbf{E}_{t+1} = \mathbf{E}_t + \eta \cdot \mathrm{Adam}(\delta_{\mathbf{E}_t}) \tag{7}$$

where η is the step size for each update.
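A sketch of the gradient combination in Eqs. 5-6 might look as follows; we assume the three kinds of gradients have been obtained via separate backward passes, and the small denominator guard is our own addition for numerical safety.

```python
import torch

def pcgrad_update_direction(grad_joint, grad_ce, grads_attn, alpha):
    """Compute delta_E of Eq. 5 with the per-layer beta_l of Eq. 6.

    grad_joint: gradient of J = J_CE + alpha * sum_l J_ATTN^(l) w.r.t. E
    grad_ce:    gradient of J_CE alone w.r.t. E
    grads_attn: list of per-layer gradients of J_ATTN^(l) w.r.t. E
    """
    delta = grad_joint.clone()
    ce_norm_sq = grad_ce.pow(2).sum().clamp_min(1e-12)
    for g_attn in grads_attn:
        dot = (grad_ce * g_attn).sum()
        if dot > 0:
            continue                            # gradients agree: beta_l = 0 (Eq. 6)
        beta = dot / ce_norm_sq                 # conflicting case of Eq. 6
        delta = delta - alpha * beta * grad_ce  # correction term of Eq. 5
    return delta
```

The returned direction is then fed to Adam as in Eq. 7, i.e., E is updated by η times the Adam-transformed δ_E.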
3.6 SPARSE PATCH-FOOL: A SPARSE VARIANT OF PATCH-FOOL
Motivation. One natural question associated with Patch-Fool is: "How many pixels within a patch need to be perturbed to effectively mislead the model into misclassifying the input image?" There exist two extreme cases: (1) perturbing only a few pixels, which yields local perturbations against which ViTs are more robust, and (2) perturbing the whole patch, i.e., our vanilla Patch-Fool. We hypothesize that answering this question helps to better understand under what circumstances ViTs are more (or less) robust than CNNs. To this end, we study a variant of Patch-Fool, dubbed Sparse Patch-Fool, as defined below.
Objective formulation. For enabling Sparse Patch-Fool, we add a sparsity constraint to Eq. 1, i.e.:

$$\operatorname*{argmax}_{1 \le p \le n,\ \mathbf{E} \in \mathbb{R}^{n \times d},\ \mathbf{M} \in \{0,1\}^{n \times d}} J(\mathbf{X} + \mathbf{1}_p \odot (\mathbf{M} \circ \mathbf{E}),\ y) \quad \text{s.t.}\ \|\mathbf{M}\|_0 \le k \tag{8}$$
where we use a binary mask M with a predefined sparsity parameter k to control the sparsity of E. To effectively learn the binary distribution of M, we parameterize M as a continuous value M̂, following (Ramanujan et al., 2020; Diffenderfer & Kailkhura, 2021). During the forward pass, only the top k highest elements of M̂ are activated and set to 1 while all others are set to 0, satisfying the target sparsity constraint; during the backward pass, all elements in M̂ are updated via straight-through estimation (Bengio et al., 2013). We jointly optimize M̂ with E as in Eq. 7 (a sketch of this top-k binarization is given below).
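The top-k binarization with a straight-through gradient can be sketched as a custom autograd function; the class name is ours, and whether the top-k selection is applied globally or per patch is an implementation detail we leave as an assumption.

```python
import torch

class TopKMaskSTE(torch.autograd.Function):
    """Binarize continuous scores M_hat into a {0,1} mask with exactly k ones,
    passing gradients straight through to M_hat in the backward pass."""

    @staticmethod
    def forward(ctx, m_hat: torch.Tensor, k: int) -> torch.Tensor:
        mask = torch.zeros_like(m_hat)
        topk_idx = m_hat.flatten().topk(k).indices
        mask.view(-1)[topk_idx] = 1.0  # activate only the k largest scores
        return mask

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None          # straight-through estimator for M_hat
```

The resulting mask M = TopKMaskSTE.apply(m_hat, k) then gates the perturbation as in Eq. 8, with m_hat and E optimized jointly.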
3.7 MILD PATCH-FOOL: A MILD VARIANT OF PATCH-FOOL
In addition to the number of perturbed pixels manipulated by Sparse Patch-Fool, the perturbation strength is another dimension for measuring the perturbations within a patch. We therefore also propose a mild variant of Patch-Fool, dubbed Mild Patch-Fool, with a constraint on the norm of the perturbation E to ensure ∥E∥_2 ≤ ϵ or ∥E∥_∞ ≤ ϵ, known as the L2 and L∞ constraints, respectively. We achieve this by scaling (for the L2 constraint) or clipping (for the L∞ constraint) E after updating it, as sketched below.
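This post-update projection can be realized in a few lines; the helper below is a sketch under the assumption that E is projected globally, i.e., over all perturbed patches at once for the L2 case, consistent with the measurement described in Sec. 4.6.

```python
import torch

def project_perturbation(E: torch.Tensor, eps: float, norm: str = "linf") -> torch.Tensor:
    """Project E back into the epsilon-ball after each update (Mild Patch-Fool)."""
    if norm == "linf":
        return E.clamp(-eps, eps)               # clipping for the L_inf constraint
    if norm == "l2":
        n = E.norm(p=2)
        return E * (eps / n) if n > eps else E  # rescaling for the L_2 constraint
    raise ValueError(f"unknown norm: {norm!r}")
```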
4 EVALUATION OF PATCH-FOOL
4.1 EVALUATION SETUP
Models and datasets. We mainly benchmark the robustness of the DeiT (Touvron et al., 2021) family with the ResNet (He et al., 2016) family, using their official pretrained models. Note that we adopt DeiT models without distillation for a fair comparison. We randomly select 2500 images from the validation set of ImageNet for evaluating robustness, following (Bhojanapalli et al., 2021).
Patch-Fool settings. The weight coefficient α in Eq. 4 is set to 0.002. The step size η in Eq. 7 is initialized to 0.2 and decayed by 0.95 every 10 iterations, and the total number of iterations is 250 (a simplified sketch of the full attack loop with these settings is given below). For evaluating Patch-Fool with different perturbation strengths, we allow Patch-Fool to attack up to four patches based on the attention-aware patch selection in Sec. 3.4, i.e., the patches with the top importance scores defined in Eq. 2 are selected. Note that we report the robust accuracy instead of the attack success rate throughout this paper, as our main focus is the robustness benchmark.
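Putting the pieces together, a simplified end-to-end loop with these hyperparameters might look as follows; for brevity it ascends only the cross-entropy term of Eq. 4 (the attention-aware term is omitted), and it assumes the model consumes a batch of flattened patch sequences and that y is a scalar label tensor. These are our own assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def patch_fool_simplified(model, X, y, p, eta=0.2, iters=250,
                          decay=0.95, decay_every=10):
    """Simplified Patch-Fool loop. X: (n, d) flattened patches; p: target patch."""
    E = torch.zeros_like(X, requires_grad=True)
    opt = torch.optim.Adam([E], lr=eta)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=decay_every, gamma=decay)
    one_hot = torch.zeros(X.size(0), 1, device=X.device)
    one_hot[p] = 1.0
    for _ in range(iters):
        opt.zero_grad()
        logits = model((X + one_hot * E).unsqueeze(0))  # add a batch dimension
        loss = -F.cross_entropy(logits, y.view(1))      # ascend J_CE of Eq. 1
        loss.backward()
        opt.step()                                      # Eq. 7 via Adam
        sched.step()                                    # decay eta by 0.95 every 10 iters
    return (X + one_hot * E).detach()
```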
4.2 BENCHMARK THE ROBUSTNESS OF VITS AND CNNS AGAINST PATCH-FOOL
We adopt our Patch-Fool to attack ViTs and use the saliency map to guide the patch selection for attacking CNNs, which is the strongest attack setting as shown in Sec. 4.3. The resulting robust accuracy of both the DeiT and ResNet families under different numbers of attacked patches is shown in Fig. 2. We can observe that DeiT models are consistently less robust against Patch-Fool than their ResNet counterparts under similar model complexity; e.g., compared with ResNet-50, DeiT-S suffers a 16.31% robust accuracy drop under the single-patch attack of Patch-Fool, although it has a 3.38% higher clean accuracy and an 18.70% higher robust accuracy against PGD-20 (ϵ = 0.001). This indicates that ViTs are not always robust learners, as they may underperform CNNs under customized perturbations, and their seeming robustness against existing attacks can be overturned.
4.3 ABLATION STUDY: EFFECTIVENESS OF THE ATTENTION-AWARE PATCH SELECTION
To validate the effectiveness of our attention-aware patch selection method, we benchmark two alternative patch selection mechanisms: (1) random patch selection, and (2) saliency-map-based patch selection. For the latter, we adopt the averaged saliency score of a patch, defined as the averaged absolute value of the gradients on each pixel in the patch following (Simonyan et al., 2013), as the metric to select patches (a small sketch of this scoring is given after this paragraph). For a fair comparison, we adopt only the final cross-entropy loss JCE in Eq. 4 in this set of experiments. As shown in Tab. 2, we can see that (1) among the three strategies for attacking ViTs, our attention-aware patch selection is the most effective strategy in most cases and we thus adopt it by default; (2) DeiT variants are still consistently less robust than their ResNet counterparts under similar model complexity, indicating that attacking the basic component participating in self-attention calculations can indeed effectively degrade ViTs' robustness; and (3) Patch-Fool equipped with random patch selection, with a 2.64% robust accuracy gap against the best strategy, can already effectively degrade DeiTs' robustness, while it cannot effectively attack ResNets without the guidance from the saliency map, indicating that ViTs are generally more vulnerable than CNNs to patch-wise perturbations.
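A minimal sketch of the saliency-based patch scoring referenced above, assuming square images split into non-overlapping patch_size × patch_size patches (the function name and defaults are ours):

```python
import torch
import torch.nn.functional as F

def most_salient_patch(model, x, y, patch_size=16):
    """Averaged absolute input gradient per patch (Simonyan et al., 2013),
    used here to choose which patch to attack on CNNs."""
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    g = x.grad.abs().mean(dim=1, keepdim=True)  # (B, 1, H, W), averaged over channels
    scores = F.avg_pool2d(g, patch_size)        # mean saliency per patch
    return scores.flatten(1).argmax(dim=1)      # most salient patch index per image
```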
We also perform an ablation study for l in Eq. 2, i.e., the layer at which the attention-aware patch selection is performed. As shown in Tab. 2, selecting early layers generally achieves consistently better results than selecting later layers, which we conjecture is because patches in early layers still roughly maintain the original information extracted from the inputs, while their counterparts in later layers are mixed with information from other patches, providing inferior guidance for selecting the perturbed patch. This conjecture is supported by the observed phase change in the attention map: after the 6-th layer, more complex correlations between patches are captured in addition to the diagonal ones. Therefore, we set l = 5 by default.
4.4 ABLATION STUDY: EFFECTIVENESS OF THE ATTENTION-AWARE LOSS
To evaluate the effectiveness of our attention-aware loss with the cosine-similarity-based re-weighting mechanism (see Sec. 3.5), we compare it with two baselines: (1) training with only the final cross-entropy loss, i.e., JCE is enabled without the attention-aware loss, and (2) βl = 0, ∀l ∈ [1, 12], i.e., the layer-wise J^(l)_ATTN in Eq. 4 is directly summed together with the final JCE. As shown in Tab. 3, we observe that (1) our attention-aware loss equipped with the cosine-similarity-based re-weighting strategy consistently achieves the best attack performance, e.g., a 3.17% reduction in robust accuracy compared with the baseline without the attention-aware loss; and (2) directly summing up all the losses leads to poor convergence, especially with limited perturbed patches.
4.5 BENCHMARK AGAINST SPARSE PATCH-FOOL
Setup. To study the influence of the sparsity of perturbed pixels on both CNNs and ViTs, we evaluate our proposed Sparse Patch-Fool by varying the global perturbation ratio (PR) of the whole image (i.e., k divided by the total number of pixels) as well as the number of patches allowed to be perturbed.
Benchmark the robustness of ViTs and CNNs. As shown in Tab. 4, under different perturbation ratios and numbers of perturbed patches, neither ViTs nor CNNs will always be the winner in robustness. In particular, under relatively small perturbation ratios or more perturbed patches (e.g., when all patches are allowed to be perturbed), CNNs will suffer from worse robustness, while ViTs will be more vulnerable learners under relatively large perturbation ratios as well as fewer perturbed patches.
Influence of the number of perturbed patches. We further study the influence of the number of perturbed patches under the same global perturbation ratio, as shown in Tab. 5. We can see that (1) under a small perturbation ratio of 0.05%, which is closer to local perturbations, CNNs are the consistent loser in robustness; and (2) under a relatively large perturbation ratio of 0.5%, although increasing the number of perturbed patches leads to a consistent reduction in CNNs' robust accuracy, the robustness reduction for ViTs quickly saturates, i.e., ViTs gradually switch from the loser to the winner in robustness as compared to CNNs.
Insights. We find that smaller perturbation ratios under the same number of perturbed patches, or more perturbed patches under the same perturbation ratio, lead to fewer perturbed pixels within one patch, i.e., a lower perturbation density, which is closer to local perturbations against which ViTs are more robust than CNNs. In contrast, given more perturbed pixels in one patch, i.e., a higher perturbation density, of which an extreme case is our vanilla Patch-Fool, ViTs become more vulnerable learners than CNNs. This indicates that a high perturbation density can be a lens for exploring ViTs' vulnerability, which has been neglected by existing adversarial attacks.
Considering that the perturbation strength is another dimension for measuring the perturbations within a patch, in addition to the perturbation density, we evaluate our proposed Mild Patch-Fool in Sec. 4.6.
4.6 BENCHMARK AGAINST MILD PATCH-FOOL
Setup. To study the influence of the perturbation strength within each patch, we evaluate our proposed Mild Patch-Fool from Sec. 3.7 with L2 or L∞ constraints on the patch-wise perturbations under different strengths indicated by ϵ. Note that the perturbation strength ϵ of the L2-based Mild Patch-Fool is accumulated over all perturbed patches. We benchmark both the DeiT and ResNet families with different numbers of perturbed patches, as shown in Tab. 6 and Tab. 7.
Observations and analysis. We can observe that (1) the robust accuracy degrades more under larger perturbation strengths ϵ for both the L2 and L∞ constraints, and (2) more importantly, DeiTs are more robust than ResNets under small ϵ and gradually become more vulnerable than ResNets as ϵ increases. For example, as ϵ gradually increases from 8/255 to 128/255 under L∞ attacks, DeiT-S switches from the winner to the loser in robustness as compared to ResNet-50.
Insights. This set of experiments, together with the analysis in Sec. 4.5, reflects that the perturbation density and the perturbation strength are two key determinants of the robustness ranking between ViTs and CNNs: a higher/lower perturbation density or perturbation strength makes ViTs the loser/winner in robustness. This first-time finding can enhance the understanding of the robustness ranking between ViTs and CNNs and aid the decision on which models to deploy in real-world scenarios with high security awareness.
We also benchmark the effectiveness of Patch-Fool on top of adversarially trained ViTs/CNNs, evaluate the patch-wise adversarial transferability of Patch-Fool, and visualize the adversarial examples generated by our Patch-Fool's different variants in Appendix A.2∼A.4, respectively.
5 CONCLUSION
The recent breakthroughs achieved by ViTs in various vision tasks have attracted increasing attention to ViTs' robustness, aiming to fulfill the goal of deploying ViTs into real-world vision applications. In this work, we provide a new perspective on ViTs' robustness and propose a novel attack framework, dubbed Patch-Fool, to attack the basic component (i.e., a single patch) in ViTs' self-attention calculations, against which ViTs are found to be more vulnerable than CNNs. Interestingly, the proposed Sparse Patch-Fool and Mild Patch-Fool attacks, two variants of our Patch-Fool, further indicate that the perturbation density and perturbation strength on each patch seem to be the two key factors that determine the robustness ranking between ViTs and CNNs. We believe this work can shed light on better understanding ViTs' robustness and inspire innovative defense techniques.
ACKNOWLEDGEMENT
The work is supported by the National Science Foundation (NSF) through the MLWiNS program (Award number: 2003137), an NSF CAREER award (Award number: 2048183), and the RTML program (Award number: 1937592).
A APPENDIX
A.1 EVALUATING ROBUSTNESS OF VITS AND CNNS UNDER EXISTING ATTACKS
Although various comparisons of the robustness of ViTs and CNNs have been explored in pioneering works (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021), their evaluations suffer from one of the following limitations: (1) adopting only weak attack methods, (2) adopting only early ViT designs without considering recently advanced ViT architectures, or (3) not adopting the official and latest pretrained models and thus suffering from inferior clean accuracies. To this end, we extensively evaluate the robustness against common white-box attacks of several representative ViT variants, which cover the popular trends in designing ViT architectures, including (1) using local self-attention (Swin (Liu et al., 2021a)), which adopts the attention mechanism within a local region instead of the global one in vanilla ViTs to capture low-level features and reduce the computational cost, and (2) introducing the inductive bias of CNNs to build hybrid models (LeViT (Graham et al., 2021)).
A.1.1 EVALUATION SETUP
Models and datasets. We evaluate the robustness of three ViT families (i.e., DeiT (Touvron et al., 2021), Swin (Liu et al., 2021a), and LeViT (Graham et al., 2021)) and two CNN families (ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2014)) on ImageNet using their official implementations and pretrained models. Note that we adopt DeiT models without distillation, which only improves the training schedule over vanilla ViTs, for a fair comparison.
Attack settings. We adopt four adversarial attacks (i.e., PGD (Madry et al., 2017), AutoAttack (Croce & Hein, 2020), CW-L∞ (Carlini & Wagner, 2017), and CW-L2) with different perturbation strengths. In particular, for the CW-L∞ and CW-L2 attacks, we adopt the implementation in AdverTorch (Ding et al., 2019) and the same settings as (Chen et al., 2021a; Rony et al., 2019); For AutoAttack, we adopt the official implementation and default settings in (Croce & Hein, 2020).
A.1.2 OBSERVATIONS AND ANALYSIS
Observations. From the evaluation results summarized in Tab. 8, we make the following observations: (1) ViTs are consistently more robust than CNNs with comparable model complexities under all attack methods, which is consistent with previous observations (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021). In particular, DeiT-S/DeiT-B achieves an 18.70%/10.11% higher robust accuracy over ResNet-50/ResNet-152 under PGD-20 attacks with a perturbation strength of 0.001; (2) compared with vanilla ViTs, ViT variants equipped with local self-attention or convolutional modules, which improve the model's capability to capture local features and thus boost the clean accuracy, are more vulnerable to adversarial attacks, although they are still more robust than CNNs with comparable complexities. For example, Swin-T/Swin-B suffers a 14.90%/5.04% robust accuracy drop compared with DeiT-S/DeiT-B under PGD-20 attacks with a perturbation strength of 0.001; and (3) the degree of overparameterization has less influence on the robustness within the same family of ViT models compared with its great influence on CNNs' robustness, as the most lightweight DeiT-Ti can already achieve a comparable robust accuracy (-0.58%) to ResNet-152 while requiring 9.17×/46× fewer floating-point operations (FLOPs)/parameters.
Analysis. Combining the three insights drawn from the aforementioned observations, we can observe the superiority of the global attention mechanism over convolutional and local self-attention blocks, in terms of both improved robust accuracy and reduced sensitivity to the degree of model overparameterization. This indicates that the global attention mechanism itself can serve as a good robustification technique against existing adversarial attacks, even in lightweight ViTs with small model complexities. For example, as shown in Fig. 1, the gap between the attention maps generated by clean and adversarial inputs in deeper layers remains small. We wonder: "Are the global attentions in ViTs truly robust, or has their vulnerability not been fully explored and exploited?" To answer this, we propose our customized attack Patch-Fool in Sec. 3 and find that the vulnerability of global attentions can be exploited to degrade the robustness of ViTs, making them more vulnerable learners than CNNs.
A.2 PATCH-FOOL ON TOP OF ADVERSARIALLY TRAINED MODELS
To study the influence of robust training algorithms on the effectiveness of our Patch-Fool, we further benchmark the robustness of both adversarially trained ViTs and CNNs.
Setup. We apply Fast Adversarial Training (FAT) (Wong et al., 2019) with an ϵ of 2/255 and 4/255 under the L∞ constraint on top of both DeiT-Ti and ResNet-18 on ImageNet. We report the robust accuracy of the FAT trained models against our Patch-Fool in Tabs. 9 and 10.
Observations and analysis. From Tab. 9, we observe that although FAT improves the robustness of both DeiT-Ti and ResNet-18 against our Patch-Fool attacks, DeiT-Ti is still more vulnerable to Patch-Fool than ResNet-18 under the same number of perturbed patches. In addition, we observe from Tab. 10 that (1) stronger adversarial training with a larger ϵ leads to better robustness against both PGD attacks and our Patch-Fool, and (2) the improvement in robust accuracy against PGD attacks is higher than that against Patch-Fool, indicating that enhanced adversarial training schemes or other defense methods are required to robustify ViTs against our Patch-Fool, which we leave to future work.
A.3 PATCH-WISE ADVERSARIAL TRANSFERABILITY OF PATCH-FOOL
We further discuss the patch-wise adversarial transferability of Patch-Fool, i.e., transferring the perturbations generated for attacking one specific patch to attacking other patches on the same image.
Setup. Without loss of generality, we generate the adversarial perturbation for the center patch with Patch-Fool and then adopt it to attack all other patches on the same image; the resulting robust accuracy is annotated in Fig. 3. We average the robust accuracy at each patch location over a batch of 128 images.
Observations. We can observe that the adversarial patches generated by Patch-Fool can be transferred to neighboring patches with more notable accuracy degradation, while the adversarial transferability between patches far away from each other is poor.
A.4 VISUALIZING THE ADVERSARIAL EXAMPLES GENERATED BY PATCH-FOOL’S VARIANTS
Here we visualize the adversarial examples generated by Patch-Fool's variants in Fig. 4, including (1) Patch-Fool with different numbers of perturbed patches (rows 2∼3), (2) Sparse Patch-Fool with a total of 250 perturbed pixels distributed across different numbers of perturbed patches (rows 4∼6), and (3) Mild Patch-Fool under L2 and L∞ constraints (rows 7∼8). The corresponding robust accuracy is also annotated.
Observations. From the aforementioned visualization in Fig. 4, we can observe that (1) the adversarial patches generated by Patch-Fool visually resemble and emulate natural corruptions in a small region of the original image caused by potential defects of the sensors or potential noises/damages of the optical devices (see row 2), (2) more perturbed patches lead to a lower robust accuracy and worse imperceptibility (see row 3), (3) the generated adversarial perturbations of our Sparse Patch-Fool resemble impulse noises, which improves imperceptibility while still notably degrading the robust accuracy especially when perturbing more patches (see rows 4∼6), and (4) adding L2 and L∞ constraints will notably improve the imperceptibility while incurring less degradation in the robust accuracy (rows 7∼8). | 1. What is the novel approach proposed by the paper in the field of adversarial attacks on Vision Transformers?
2. What are the strengths and weaknesses of the proposed attack method, particularly regarding its effectiveness and limitations?
3. How does the reviewer assess the clarity and quality of the paper's content, including the presentation of results and ablation studies?
4. What are some concerns or questions raised by the reviewer regarding the paper's methodology and conclusions?
5. Are there any suggestions or recommendations provided by the reviewer for improving the paper or its contributions? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a new attack on Vision Transformers (ViTs) called Patch-Fool. The attack proceeds by first picking the patch which contributes the most (in the self-attention calculation) to other patches and then perturbing it adversarially wrt a cross-entropy + attention-based loss. The results show that this kind of attack degrades performance significantly wrt prior work. The authors then perform various ablation studies to justify their architecture/loss choices.
Review
To my knowledge, the approach is novel and the performance gains (degradation in this case) are non-trivial. Overall, the paper is fairly easy to understand and the evaluation is fair.
Here are some of my thoughts/criticisms:
My main concern is wrt the lack of example images of adversarial examples. From what I have seen, there is only one image in Fig 1 which shows the output of the proposed technique. The authors later go on to show that increasing the number of patches seems to make the attack more effective. However, at that point, is the image even adversarial in the traditional definition of the word? (i.e. the change in the image is imperceptible to human eyes). Patch-Fool does not have an \epsilon factor like traditional robust training seems to have, so, essentially, we can perturb the image by a large amount to make the attack more potent and potentially make large changes to the image. Maybe the authors could also quantify how far these adversarial examples are via L2 distance.
Related to the previous point, if we perturb all patches in the image, accuracy drops drastically for both CNNs and ViTs. So, can this be considered a procedure to generate adversarial examples for both architectures simultaneously? The authors partially answer this in the sparse case in Table 7 but I was curious about the full perturbation case. Again, without an example image to look at, it is hard to say whether all patches being changed is even an adversarial example in the aforementioned sense.
What are the numbers in table 3? I realized it's accuracy on a later reading but the authors should mention this in the caption.
A minor point about terminology - I have seen “Robust accuracy” used in the literature to mean accuracy of a robust model (i.e. model trained via robust training) whereas here, I assume, the authors used a model trained on clean data and fed it adversarial images. This seems to be implied in the paper but would be useful to mention to avoid confusion.
Formatting/Typos:
Section 4.3 - “according a predefined value l” should be “according to a predefined value l”
The title for section 5.2 is the last line of the page. I understand the authors are trying to adhere to page limits, but it breaks the "flow" readability-wise. I would encourage the authors to correct this.
ICLR | Title
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Abstract
Vision transformers (ViTs) have recently set off a new wave in neural architecture design thanks to their record-breaking performance in various vision tasks. In parallel, to fulfill the goal of deploying ViTs into real-world vision applications, their robustness against potential malicious attacks has gained increasing attention. In particular, recent works show that ViTs are more robust against adversarial attacks as compared with convolutional neural networks (CNNs), and conjecture that this is because ViTs focus more on capturing global interactions among different input/feature patches, leading to their improved robustness to local perturbations imposed by adversarial attacks. In this work, we ask an intriguing question: “Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?” Driven by this question, we first conduct a comprehensive experiment regarding the robustness of both ViTs and CNNs under various existing adversarial attacks to understand the underlying reason favoring their robustness. Based on the drawn insights, we then propose a dedicated attack framework, dubbed Patch-Fool, that fools the self-attention mechanism by attacking its basic component (i.e., a single patch) with a series of attention-aware optimization techniques. Interestingly, our Patch-Fool framework shows for the first time that ViTs are not necessarily more robust than CNNs against adversarial perturbations. In particular, we find that ViTs are more vulnerable learners compared with CNNs against our Patch-Fool attack which is consistent across extensive experiments, and the observations from Sparse/Mild Patch-Fool, two variants of Patch-Fool, indicate an intriguing insight that the perturbation density and strength on each patch seem to be the key factors that influence the robustness ranking between ViTs and CNNs. It can be expected that our Patch-Fool framework will shed light on both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment. Our codes are available at https://github.com/RICE-EIC/Patch-Fool.
1 INTRODUCTION
The recent performance breakthroughs achieved by vision transformers (ViTs) (Dosovitskiy et al., 2020) have fueled an increasing enthusiasm towards designing new ViT architectures for different vision tasks, including object detection (Carion et al., 2020; Beal et al., 2020), semantic segmentation (Strudel et al., 2021; Zheng et al., 2021; Wang et al., 2021), and video recognition (Arnab et al., 2021; Liu et al., 2021b; Li et al., 2021b; Fan et al., 2021). To fulfill the goal of deploying ViTs into real-world vision applications, the security concern of ViTs is of great importance and challenge, especially in the context of adversarial attacks (Goodfellow et al., 2014), under which an imperceptible perturbation onto the inputs can mislead the models to malfunction.
In response, the robustness of ViTs against adversarial attacks has attracted increasing attention. For example, recent works (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021) find that in addition to ViTs’ decent task performances, they are more robust to adversarial attacks compared with convolutional neural networks (CNNs) under comparable model complexities. In particular, (Shao et al., 2021) claims that ViTs focus more on capturing the global interaction among
∗Equal contribution.
input/feature patches via its self-attention mechanism and the learned features contain less low-level information, leading to superior robustness to the local perturbations introduced by adversarial attacks. A natural response to this seemingly good news would be determining whether ViTs are truly robust against all kinds of adversarial perturbations or if their current win in robustness is an inevitable result of biased evaluations using existing attack methods that are mostly dedicated to CNNs. To unveil the potential vulnerability of ViTs, this work takes the first step in asking an intriguing question: “Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?”, and makes the following contributions:
• We propose a new attack framework, dubbed Patch-Fool, aiming to fool the self-attention mechanism by attacking the basic component (i.e., a single patch) participating in ViTs’ self-attention calculations. Our Patch-Fool attack features a novel objective formulation, which is then solved by Patch-Fool’s integrated attention-aware patch selection technique and attention-aware loss design;
• We evaluate the robustness of both ViTs and CNNs against our Patch-Fool attack with extensive experiments and find that ViTs are consistently less robust than CNNs across various attack settings, indicating that ViTs are not always robust learners and their seeming robustness against existing attacks can be overturned under dedicated adversarial attacks;
• We further benchmark the robustness of both ViTs and CNNs under two variants of PatchFool, i.e., Sparse Patch-Fool and Mild Patch-Fool, and discover that the perturbation density, defined as the number of perturbed pixels per patch, and the perturbation strength highly influence the robustness ranking between ViTs and CNNs, where our Patch-Fool is an extreme case of high perturbation density and strength.
We believe our work has opened up a new perspective for exploring ViTs’ vulnerability and understanding the different behaviors of CNNs and ViTs under adversarial attacks, and can provide insights to both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment.
2 RELATED WORKS
Vision transformers. Motivated by the great success of Transformers in the natural language processing (NLP) field (Vaswani et al., 2017), ViTs have been developed by splitting an input image into a series of image patches and adopting self-attention modules for encoding the image (Dosovitskiy et al., 2020), and been shown to achieve competitive or superior performance over CNNs via dedicated data augmentation (Touvron et al., 2021) or self-attention structures (Yang et al., 2021; Graham et al., 2021; Liu et al., 2021a). As such, there has been tremendously increased attention on applying ViTs to various computer vision applications, such as self-supervised learning (Caron et al., 2021; Chen et al., 2021b; Xie et al., 2021; Li et al., 2021a), object detection (Carion et al., 2020; Beal et al., 2020), and semantic segmentation (Strudel et al., 2021; Zheng et al., 2021; Wang et al., 2021). The achievable performance of ViTs are continuously refreshed by emerging ViT variants, which provide new arts for designing ViT architectures. For example, convolutional modules have been incorporated into ViTs for capturing low-level features (Xiao et al., 2021; Wu et al., 2021; Graham et al., 2021; Peng et al., 2021), and replacing the global self-attention mechanism with local self-attention modules (Liu et al., 2021a; Dong et al., 2021; Liang et al., 2021; Liu et al., 2021b; Chu et al., 2021) has further pushed forward ViTs’ achievable accuracy-efficiency trade-off. Motivated by the growing interest in deploying ViTs into real-world applications, this work aims to better understand the robustness of ViTs and to develop adversarial attacks dedicated to ViTs.
Adversarial attack and defense. Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks (Goodfellow et al., 2014), i.e., imperceptible perturbations onto the inputs can mislead DNNs to make wrong predictions. As adversaries, stronger attacks are continuously developed, including both white-box (Madry et al., 2017; Croce & Hein, 2020; Carlini & Wagner, 2017; Papernot et al., 2016; Moosavi-Dezfooli et al., 2016) and black-box ones (Chen et al., 2017; Ilyas et al., 2018b; Andriushchenko et al., 2020; Guo et al., 2019; Ilyas et al., 2018a), which aggressively degrade the performances of the target DNN models. In particular, (Brown et al., 2017; Liu et al., 2020) build universal adversarial patches that are able to attack different scenes and (Liu et al., 2018; Zhao et al., 2020; Hoory et al., 2020) adopt adversarial patches to attack object detectors. However, these works focus on merely CNNs, questions regarding (1) whether patch-wise attacks
are effective for ViTs as compared to CNNs, and (2) how to efficiently construct strong patch-wise attacks utilizing the unique structures of ViTs are still under-explored yet interesting to be studied, especially considering patches are the basic elements for composing the inputs of ViTs. In response, various defense methods (Guo et al., 2017; Xie et al., 2017; Cohen et al., 2019; Metzen et al., 2017; Feinman et al., 2017; Fu et al., 2021a;b; Shafahi et al., 2019; Madry et al., 2017; Wong et al., 2019) have been proposed to improve DNNs’ robustness against those attacks. The readers are referred to (Akhtar & Mian, 2018; Chakraborty et al., 2018) for more attack and defense methods.
Robustness of vision transformers. Driven by the impressive performance recently achieved by ViTs in various vision tasks, their robustness has gained increasing attention. A consistent observation drawn by pioneering works that study ViTs’ robustness is that ViTs are more robust to adversarial attacks than CNNs since ViTs are more capable of capturing the global interactions among patches, while CNNs focus on local features and thus are more vulnerable to local adversarial perturbations. In particular, (Bhojanapalli et al., 2021) shows that ViT models pretrained with a sufficient amount of data are at least as robust as their ResNet counterparts on a broad range of perturbations, including natural corruptions, distribution shifts, and adversarial perturbations; (Aldahdooh et al., 2021) finds that vanilla ViTs or hybrid-ViTs are more robust than CNNs under Lp-based attacks; and (Shao et al., 2021) further explains that ViTs’ learned features contain less low-level information and are more generalizable, leading to their superior robustness, and introducing convolutional blocks that extract more low-level features will reduce the ViTs’ adversarial robustness. In addition, ViTs’ adversarial transferability has also been studied: (Mahmood et al., 2021) shows that adversarial examples do not readily transfer between CNNs and transformers and (Naseer et al., 2021; Wei et al., 2021) propose techniques to boost the adversarial transferability between ViTs and from ViTs to CNNs. In parallel, (Mao et al., 2021) refines ViTs’ architecture design to improve robustness. In our work, we challenge the common belief that ViTs are more robust than CNNs, which is concluded based on evaluations using existing attack methods, and propose to customize adaptive attacks utilizing ViTs’ captured patch-wise global interactions to make ViTs weaker learners.
3 THE PROPOSED PATCH-FOOL FRAMEWORK
In this section, we present our Patch-Fool attack method that perturbs a whole patch to fool ViTs and unveils a vulnerable perspective of ViTs.
3.1 PATCH-FOOL: VALIDATING AND RETHINKING THE ROBUSTNESS OF VITS
We extensively evaluate the robustness of several representative ViT variants against four state-ofthe-art adversarial attacks (i.e., PGD (Madry et al., 2017), AutoAttack (Croce & Hein, 2020), CWL∞ (Carlini & Wagner, 2017), and CW-L2) with different perturbation strengths in Appendix. A.1. We observe that (1) ViTs are consistently more robust than CNNs with comparable model complexities under all attack methods, which is consistent with the previous observations (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021), and (2) ViT variants equipped with local self-attention ((Swin (Liu et al., 2021a))) or convolutional modules (LeViT (Graham et al., 2021)), which improve the model capability in capturing local features and thus boosts the clean accuracy, are more vulnerable to adversarial attacks, although they are still more robust than CNNs with comparable complexities. This indicates that the global attention mechanism itself can serve as a good robustification technique against existing adversarial attacks, even in lightweight ViTs with small model complexities. For example, as shown in Fig. 1, the gap between the attention maps generated by clean and adversarial inputs in deeper layers remains small. We are curious about “Are the global attentions in ViTs truly robust, or their vulnerability has not been fully explored and exploited?”. To answer this, we propose our customized attack in the following sections.
3.2 PATCH-FOOL: MOTIVATION
Given the insensitivity of ViTs’ self-attention mechanism to local perturbations, we pay a close attention to the basic component (i.e., a single patch) participating in the self-attention calculation, and hypothesize that customized adversarial perturbations onto a patch can be more effective in fooling the captured patch-wise global interactions of self-attention modules than attacking the CNN modules. This is also inspired by the word substitution attacks (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2019; Zang et al., 2019) to Transformers in NLP tasks, which replace a word with its synonyms, and here an image patch in ViTs serves a similar role as a word.
3.3 PATCH-FOOL: SETUP AND OBJECTIVE FORMULATION
Attack setup. In our proposed Patch-Fool Attack, we do not limit the perturbation strength onto each pixel and, instead, constrain all the perturbed pixels within one patch (or several patches), which can be viewed as a variant of sparse attacks (Dong et al., 2020; Modas et al., 2019; Croce & Hein, 2019). Such attack strategies will lead to adversarial examples with a noisy patch as shown in Fig. 1, which visually resembles and emulates natural corruptions in a small region of the original image, e.g., one noisy patch only counts for 1/196 in the inputs of DeiT-S (Touvron et al., 2021), caused by potential defects of the sensors or potential noises/damages of the optical devices.
Objective formulation. Given the loss function J and a series of input image patches X = [x1, · · · ,xn]⊤ ∈ Rn×d with its associated label y, the objective of our adversarial algorithm can be formulated as:
argmax 1≤p≤n,E∈Rn×d
J(X+ 1p ⊙E, y) (1)
where E denotes the adversarial perturbation, 1p ∈ Rn such that 1p(i) = { 0, i ̸= p 1, i = p is a one hot vector, and ⊙ represents the penetrating face product such that a⊙B = [a ◦ b1, · · · ,a ◦ bd] where ◦ is the Hadamard product and bj is the j-th column of matrix B. For solving Eq. 1, our Patch-Fool needs to (1) select the adversarial patch p, and (2) optimize the corresponding E as elaborated in Sec. 3.4 and Sec. 3.5, respectively.
3.4 PATCH-FOOL: DETERMINE p VIA ATTENTION-AWARE PATCH SELECTION
Denoting a(l,h,i) = [a(l,h,i)1 , · · · , a (l,h,i) n ] ∈ Rn as the attention distribution for the i-th token of the h-th head in the l-th layer. For each layer l, we define:
s (l) j = ∑ h,i a (l,h,i) j (2)
which measures the importance of the j-th token in the l-th layer based on its contributions to other tokens in the self-attention calculation. For better fooling ViTs, we select the most influential patch p derived from argmax
j s (l) j according to a predefined value l. We fix l = 5 by default since the patches
at later self-attention layers are observed to be diverse from the input patches due to the increased information mixed from other patches, making them non-ideal for guiding the selection of input patches as justified in Sec. 4.3.
3.5 PATCH-FOOL: OPTIMIZE E VIA ATTENTION-AWARE LOSS
Given the selected adversarial patch index p from the above step, we define the attention-aware loss for the l-th layer as follows:
J (l) ATTN(X, p) = ∑ h,i a(l,h,i)p (3)
which is expected to be maximized so that the adversarial patch p, serving as the target adversarial patch, can attract more attention from other patches for more effectively fooling ViTs. The perturbation E is then updated based on both the final classification loss, i.e., the cross-entropy loss JCE, and a layer-wise attention-aware loss:
J(X̃, y, p) = JCE(X̃, y) + α ∑ l J (l) ATTN(X̃, p) (4)
where X̃ ≜ X+ 1p ⊙E and α is a weighted coefficient for controlling ∑ l J (l) ATTN(X̃, p). We further adopt PCGrad (Yu et al., 2020) to avoid the gradient conflict of two losses, and thus the update of perturbation E is calculated using the following equation
δE = ∇EJ(X̃, y, p)− α ∑ l βl∇EJCE(X̃, y) (5)
where
βl = 0,
〈 ∇EJCE(X̃, y),∇EJ (l)ATTN(X̃, p) 〉 > 0〈
∇EJCE(X̃,y),∇EJ(l)ATTN(X̃,p) 〉
∥∇EJCE(X̃,y)∥2 , otherwise
(6)
Following PGD (Madry et al., 2017), we iteratively update E using an Adam optimizer (Kingma & Ba, 2014): Et+1 = Et + η ·Adam(δEt) (7) where η is the step size for each update.
3.6 SPARSE PATCH-FOOL: A SPARSE VARIANT OF PATCH-FOOL
Motivation. One natural question associated with Patch-Fool is: “How many pixels within a patch are needed to be perturbed for effectively misleading the model to misclassify the input image?”. There exist two extreme cases: (1) perturbing only a few pixels that lead to local perturbations against which ViTs are more robust, and (2) perturbing the whole patch, i.e., our vanilla Patch-Fool. We hypothesize that answering this question helps better understand under what circumstances ViTs are more (or less) robust than CNNs. To this end, we study a variant of Patch-Fool, dubbed Sparse Patch-Fool, as defined below.
Objective formulation. For enabling Sparse Patch-Fool, we add a sparse constraint to Eq. 1, i.e.: argmax
1≤p≤n,E∈Rn×d,M∈{0,1}n×d J(X+ 1p ⊙ (M ◦E), y) s.t. ∥M∥0 ≤ k (8)
where we use a binary mask M with a predefined sparsity parameter k to control the sparsity of E. To effectively learn the binary distribution of M, we parameterize M as a continuous value M̂, following (Ramanujan et al., 2020; Diffenderfer & Kailkhura, 2021). During forward, only the top k highest elements of M̂ is activated and set to 1 and others are set to 0 to satisfy the target sparsity constraint; and during backward, all the elements in M̂ will be updated via straight-through estimation (Bengio et al., 2013). We jointly optimize M̂ with E as in Eq. 7.
3.7 MILD PATCH-FOOL: A MILD VARIANT OF PATCH-FOOL
In addition to the number of perturbed pixels manipulated by Sparse Patch-Fool, the perturbation strength is another dimension for measuring the perturbations within a patch. We also propose a mild variant of Patch-Fool, dubbed Mild Patch-Fool, with a constraint on the norm of the perturbation E to ensure ∥E∥2 ≤ ϵ or ∥E∥∞ ≤ ϵ which are known as the L2 and L∞ constraint, respectively. We achieve this by scaling (for the L2 constraint) or clipping (for the L∞ constraint) E after updating it.
4 EVALUATION OF PATCH-FOOL
4.1 EVALUATION SETUP
Models and datasets. We mainly benchmark the robustness of the DeiT (Touvron et al., 2021) family with the ResNet (He et al., 2016) family, using their official pretrained models. Note that we adopt DeiT models without distillation for a fair comparison. We randomly select 2500 images from the validation set of ImageNet for evaluating robustness, following (Bhojanapalli et al., 2021).
Patch-Fool settings. The weight coefficient α in Eq. 4 is set as 0.002. The step size η in Eq. 7 is initialized to be 0.2 and decayed by 0.95 every 10 iterations, and the number of total iterations is 250. For evaluating Patch-Fool with different perturbation strengths, we allow Patch-Fool to attack up to four patches based on the attention-aware patch selection in Sec. 3.4, i.e., the patches with top importance scores defined in Eq. 2 will be selected. Note that we report the robust accuracy instead of the attack success rate throughout this paper as our main focus is the robustness benchmark.
4.2 BENCHMARK THE ROBUSTNESS OF VITS AND CNNS AGAINST PATCH-FOOL
We adopt our Patch-Fool to attack ViTs and use the saliency map to guide the patch selection for attacking CNNs, which is the strongest attack setting as shown in Sec. 4.3. The resulting robust accuracy of both the DeiT and ResNet families under different numbers of attacked patches is shown in Fig. 2. We can observe that DeiT models are consistently less robust against Patch-Fool than their ResNet counterparts under similar model complexity, e.g., compared with ResNet-50, DeiT-S suffers from a 16.31% robust accuracy drop under the single-patch attack of Patch-Fool, although it has a 3.38% higher clean accuracy and an 18.70% higher robust accuracy against PGD-20 (ϵ = 0.001). This indicates that ViTs are not always robust learners: they may underperform CNNs under customized perturbations, and their seeming robustness against existing attacks can be overturned.
4.3 ABLATION STUDY: EFFECTIVENESS OF THE ATTENTION-AWARE PATCH SELECTION
To validate the effectiveness of our attention-aware patch selection method, we benchmark two variants of the patch selection mechanism: (1) random patch selection, and (2) saliency-map-based patch selection. For the latter, we adopt the averaged saliency score of a patch, defined as the averaged absolute value of the gradients on each pixel in a patch following (Simonyan et al., 2013), as the metric to select patches (a sketch of this metric follows this paragraph). For a fair comparison, we only adopt the final cross-entropy loss JCE in Eq. 4 in this set of experiments. As shown in Tab. 2, we can see that (1) among the three strategies for attacking ViTs, our attention-aware patch selection is the most effective strategy in most cases and thus we adopt it by default; (2) DeiT variants are still consistently less robust than their ResNet counterparts under similar model complexity, indicating that attacking the basic component participating in self-attention calculations can indeed effectively degrade ViTs’ robustness; and (3) Patch-Fool equipped with random patch selection, with a 2.64% robust accuracy gap against the best strategy, can already effectively degrade DeiTs’ robustness, while it cannot effectively attack ResNets without the guidance from the saliency map, indicating ViTs are generally more vulnerable than CNNs to patch-wise perturbations.
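For concreteness, a minimal sketch of this saliency metric is given below; the function name, the patch size, and the tensor shapes are our assumptions for illustration:

import torch
import torch.nn.functional as F

def patch_saliency_scores(model, x, y, patch_size=16):
    # x: (B, C, H, W) input images; y: (B,) labels.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Pixel-wise saliency: absolute input gradient averaged over channels.
    sal = x.grad.abs().mean(dim=1, keepdim=True)        # (B, 1, H, W)
    # Average the pixel-wise saliency within each non-overlapping patch.
    scores = F.avg_pool2d(sal, kernel_size=patch_size)  # (B, 1, H/ps, W/ps)
    return scores.flatten(1)                            # (B, num_patches)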
We also perform an ablation study on l in Eq. 2, i.e., the layer at which the attention-aware patch selection is performed. As shown in Tab. 2, selecting early layers generally achieves consistently better results than selecting later layers, which we conjecture is because patches in early layers can still roughly maintain the original information extracted from the inputs, while their counterparts in later layers are mixed with information from other patches, providing inferior guidance for selecting the perturbed patch. This conjecture is supported by the observed phase change in the attention map, i.e., after the 6-th layer, more complex correlations between patches are captured in addition to the diagonal ones. Therefore, we set l = 5 by default.
4.4 ABLATION STUDY: EFFECTIVENESS OF THE ATTENTION-AWARE LOSS
To evaluate the effectiveness of our attention-aware loss with the cosine-similarity-based re-weighting mechanism (see Sec. 3.5), we compare it with two baselines: (1) training with only the final cross-entropy loss, i.e., JCE is enabled without the attention-aware loss, and (2) βl = 0, ∀l ∈ [1, 12], i.e., the layerwise J (l)ATTN in Eq. 4 is directly summed together with the final JCE. As shown in Tab. 3, we can observe that (1) our attention-aware loss equipped with the cosine-similarity-based re-weighting strategy consistently achieves the best attack performance, e.g., a 3.17% reduction in robust accuracy compared with the baseline without the attention-aware loss; and (2) directly summing up all the losses leads to poor convergence, especially under limited perturbed patches.
4.5 BENCHMARK AGAINST SPARSE PATCH-FOOL
Setup. To study the influence of the sparsity of perturbed pixels on both CNNs and ViTs, we evaluate our proposed Sparse Patch-Fool by varying the global perturbation ratio (PR) of the whole image (i.e., k divided by the total number of pixels) as well as the number of patches allowed to be perturbed.
Benchmark the robustness of ViTs and CNNs. As shown in Tab. 4, under different perturbation ratios and numbers of perturbed patches, neither ViTs nor CNNs are always the winner in robustness. In particular, under relatively small perturbation ratios or more perturbed patches (e.g., when all patches are allowed to be perturbed), CNNs suffer from worse robustness, while ViTs are the more vulnerable learners under relatively large perturbation ratios and fewer perturbed patches.
Influence of the number of perturbed patches. We further study the influence of the number of perturbed patches under the same global perturbation ratio, as shown in Tab. 5. We can see that (1) under a small perturbation ratio of 0.05%, which is closer to local perturbations, CNNs are the consistent loser in robustness; and (2) under a relatively large perturbation ratio of 0.5%, although increasing the number of perturbed patches leads to a consistent reduction in CNNs’ robust accuracy, the robustness reduction for ViTs quickly saturates, i.e., ViTs gradually switch from the loser to the winner in robustness as compared to CNNs.
Insights. Our analysis is that smaller perturbation ratios under the same number of perturbed patches, or more perturbed patches under the same perturbation ratio, lead to fewer perturbed pixels within one patch, i.e., a lower perturbation density, which is closer to local perturbations against which ViTs are more robust than CNNs. In contrast, given more perturbed pixels in one patch, i.e., a higher perturbation density (of which an extreme case is our vanilla Patch-Fool), ViTs become more vulnerable learners than CNNs. This indicates that a high perturbation density can be a perspective for exploring ViTs’ vulnerability, which has been neglected by existing adversarial attacks.
Considering that the perturbation strength is another dimension, in addition to the perturbation density, for measuring the perturbations within a patch, we evaluate our proposed Mild Patch-Fool in Sec. 4.6.
4.6 BENCHMARK AGAINST MILD PATCH-FOOL
Setup. To study the influence of the perturbation strength within each patch, we evaluate our proposed Mild Patch-Fool with L2 or L∞ constraints on the patch-wise perturbations under different strengths indicated by ϵ. Note that the perturbation strength ϵ of the L2-based Mild Patch-Fool is summed over all perturbed patches. We benchmark both the DeiT and ResNet families with different numbers of perturbed patches, as shown in Tab. 6 and Tab. 7.
Observations and analysis. We can observe that (1) the robust accuracy is degraded more by a larger perturbation strength ϵ under both L2 and L∞ constraints, and (2) more importantly, DeiTs are more robust than ResNets under small ϵ, and gradually become more vulnerable than ResNets as ϵ increases. For example, as ϵ gradually increases from 8/255 to 128/255 under L∞ attacks, DeiT-S switches from the winner to the loser in robustness as compared to ResNet-50.
Insights. This set of experiments, together with the analysis in Sec. 4.5, reflects that the perturbation density and the perturbation strength are two key determinants of the robustness ranking between ViTs and CNNs: a higher/lower perturbation density or perturbation strength makes ViTs the loser/winner in robustness. This first-time finding can enhance the understanding of the robustness ranking between ViTs and CNNs, and aid decision making about which models to deploy in real-world scenarios with high security awareness.
We also benchmark the effectiveness of Patch-Fool on top of adversarially trained ViTs/CNNs, evaluate the patch-wise adversarial transferability of Patch-Fool, and visualize the adversarial examples generated by Patch-Fool’s different variants in Appendices A.2∼A.4, respectively.
5 CONCLUSION
The recent breakthroughs achieved by ViTs in various vision tasks have attracted increasing attention to ViTs’ robustness, aiming to fulfill the goal of deploying ViTs in real-world vision applications. In this work, we provide a new perspective on ViTs’ robustness and propose a novel attack framework, dubbed Patch-Fool, that attacks the basic component (i.e., a single patch) in ViTs’ self-attention calculations, against which ViTs are found to be more vulnerable than CNNs. Interestingly, the proposed Sparse Patch-Fool and Mild Patch-Fool, two variants of our Patch-Fool, further indicate that the perturbation density and perturbation strength within each patch are the two key factors that determine the robustness ranking between ViTs and CNNs. We believe this work can shed light on better understanding ViTs’ robustness and inspire innovative defense techniques.
ACKNOWLEDGEMENT
The work is supported by the National Science Foundation (NSF) through the MLWiNS program (Award number: 2003137), an NSF CAREER award (Award number: 2048183), and the RTML program (Award number: 1937592).
A APPENDIX
A.1 EVALUATING ROBUSTNESS OF VITS AND CNNS UNDER EXISTING ATTACKS
Although various comparisons of the robustness of ViTs and CNNs have been explored in pioneering works (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021), their evaluations suffer from one of the following limitations: (1) adopting only weak attack methods, (2) adopting only early ViT designs without considering recently advanced ViT architectures, and (3) not adopting the official and latest pretrained models and hence suffering from inferior clean accuracies. To this end, we extensively evaluate the robustness against common white-box attacks of several representative ViT variants, which cover the popular trends in designing ViT architectures, including (1) using local self-attention (Swin (Liu et al., 2021a)), which adopts the attention mechanism within a local region instead of the global one in vanilla ViTs to capture low-level features and reduce the computational cost, and (2) introducing the inductive bias of CNNs to build hybrid models (LeViT (Graham et al., 2021)).
A.1.1 EVALUATION SETUP Models and datasets. We evaluate the robustness of three ViT families (i.e., DeiT (Touvron et al., 2021), Swin (Liu et al., 2021a), and LeViT (Graham et al., 2021)) and two CNN families (ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2014)) on ImageNet using their official implementation and pretrained models. Note that we adopt DeiT models without distillation, which only improves the training schedule over vanilla ViTs, for a fair comparison.
Attack settings. We adopt four adversarial attacks (i.e., PGD (Madry et al., 2017), AutoAttack (Croce & Hein, 2020), CW-L∞ (Carlini & Wagner, 2017), and CW-L2) with different perturbation strengths. In particular, for the CW-L∞ and CW-L2 attacks, we adopt the implementation in AdverTorch (Ding et al., 2019) and the same settings as (Chen et al., 2021a; Rony et al., 2019); for AutoAttack, we adopt the official implementation and default settings in (Croce & Hein, 2020).
A.1.2 OBSERVATIONS AND ANALYSIS Observations. From the evaluation results summarized in Tab. 8, we make the following observations: (1) ViTs are consistently more robust than CNNs with comparable model complexities under all attack methods, which is consistent with previous observations (Bhojanapalli et al., 2021; Aldahdooh et al., 2021; Shao et al., 2021). In particular, DeiT-S/DeiT-B achieves an 18.70%/10.11% higher robust accuracy over ResNet-50/ResNet-152 under PGD-20 attacks with a perturbation strength of 0.001; (2) compared with vanilla ViTs, ViT variants equipped with local self-attention or convolutional modules, which improve the model capability to capture local features and thus boost the clean accuracy, are more vulnerable to adversarial attacks, although they are still more robust than CNNs with comparable complexities. For example, Swin-T/Swin-B suffers from a 14.90%/5.04% robust accuracy drop compared with DeiT-S/DeiT-B under PGD-20 attacks with a perturbation strength of 0.001; and (3) the degree of overparameterization has less influence on the robustness of the same family of ViT models compared with its great influence on CNNs’ robustness, as the most lightweight DeiT-Ti can already achieve a comparable robust accuracy (-0.58%) to ResNet-152 while requiring 9.17×/46× fewer floating-point operations (FLOPs)/parameters. Analysis. Combining the three insights drawn from the aforementioned observations, we can observe the superiority of the global attention mechanism over convolutional and local self-attention blocks, in terms of both improved robust accuracy and reduced sensitivity to the degree of model overparameterization. This indicates that the global attention mechanism itself can serve as a good robustification technique against existing adversarial attacks, even in lightweight ViTs with small model complexities. For example, as shown in Fig. 1, the gap between the attention maps generated by clean and adversarial inputs in deeper layers remains small. We thus wonder: “Are the global attentions in ViTs truly robust, or has their vulnerability simply not been fully explored and exploited?”. To answer this, we propose our customized attack Patch-Fool in Sec. 3 and find that the vulnerability of global attentions can be exploited to degrade the robustness of ViTs, making them more vulnerable learners than CNNs.
A.2 PATCH-FOOL ON TOP OF ADVERSARIALLY TRAINED MODELS
To study the influence of robust training algorithms on Patch-Fool’s effectiveness, we further benchmark the robustness of both adversarially trained ViTs and CNNs.
Setup. We apply Fast Adversarial Training (FAT) (Wong et al., 2019) with an ϵ of 2/255 and 4/255 under the L∞ constraint on top of both DeiT-Ti and ResNet-18 on ImageNet. We report the robust accuracy of the FAT trained models against our Patch-Fool in Tabs. 9 and 10.
Observations and analysis. From Tab. 9, we can observe that although FAT improves the robustness of both DeiT-Ti and ResNet-18 against our Patch-Fool attacks, DeiT-Ti is still more vulnerable to Patch-Fool than ResNet-18 under the same number of perturbed patches. In addition, we can observe from Tab. 10 that (1) stronger adversarial training with larger ϵ leads to better robustness against both PGD attacks and our Patch-Fool, and (2) the improvement in robust accuracy against PGD attacks is higher than that against Patch-Fool, indicating that enhanced adversarial training schemes or other defense methods are required to robustify ViTs against our Patch-Fool, which we leave as future work.
A.3 PATCH-WISE ADVERSARIAL TRANSFERABILITY OF PATCH-FOOL
We further discuss the patch-wise adversarial transferability of Patch-Fool, i.e., transferring the perturbations generated for attacking one specific patch to attack other patches on the same image.
Setup. Without loss of generality, we generate the adversarial perturbation for the center patch with Patch-Fool, which is then adopted to attack all other patches on the same image, and the resulting robust accuracy is annotated in Fig. 3. We average the robust accuracy at each patch location over a batch of 128 images.
Observations. We can observe that the adversarial patches generated by Patch-Fool can be transferred to neighboring patches with more notable accuracy degradation, while the adversarial transferability between patches far away from each other is poor.
A.4 VISUALIZING THE ADVERSARIAL EXAMPLES GENERATED BY PATCH-FOOL’S VARIANTS
Here we visualize the adversarial examples generated by Patch-Fool’s variants in Fig. 4, including (1) Patch-Fool with different numbers of perturbed patches (rows 2∼3), (2) Sparse Patch-Fool with a total of 250 perturbed pixels distributed across different numbers of perturbed patches (rows 4∼6), and (3) Mild Patch-Fool under L2 and L∞ constraints (rows 7∼8). The corresponding robust accuracy is also annotated.
Observations. From the aforementioned visualization in Fig. 4, we can observe that (1) the adversarial patches generated by Patch-Fool visually resemble natural corruptions in a small region of the original image caused by potential sensor defects or noise/damage in the optical devices (see row 2), (2) more perturbed patches lead to a lower robust accuracy and worse imperceptibility (see row 3), (3) the generated adversarial perturbations of our Sparse Patch-Fool resemble impulse noise, which improves imperceptibility while still notably degrading the robust accuracy, especially when perturbing more patches (see rows 4∼6), and (4) adding L2 and L∞ constraints notably improves the imperceptibility while incurring less degradation in the robust accuracy (rows 7∼8). | 1. What is the focus of the paper regarding the evaluation of ViT's robustness?
2. What are the strengths of the proposed Patch-Fool attack and attention-aware attack framework?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a Patch-Fool attack which fools the self-attention mechanism of ViTs to evaluate the robustness of ViT- and CNN-based models. Prior works investigated the robustness of ViTs and CNNs under adversarial attacks designed mainly for CNNs and conjectured that ViTs are more robust than CNNs. However, in this paper, the authors attack a specific single patch and develop an attention-aware attack framework against which they find that ViTs are weaker learners than CNNs. Moreover, by developing Sparse Patch-Fool, the authors find that ViTs are more vulnerable than CNNs under high perturbation density.
Review
Strengths:
This paper provides a new perspective for evaluating the robustness of ViTs, which is novel and insightful.
From my point of view, the conclusion that the perturbation density is the key factor that influences the robustness ranking between ViTs and CNNs is significant to this field and can provide a better understanding of ViTs’ robustness.
This paper is well-written and easy to follow. The experiments are extensive and solid enough.
Weaknesses:
I notice that the robustness benchmarks of this paper are conducted on non-robust models. Some recent works have also proposed adversarial training for ViTs, such as [1]. Can the authors provide results on adversarially trained ViTs and CNNs for better benchmarking the robustness of these models?
[1] Shao, Rulin, et al. "On the adversarial robustness of visual transformers." arXiv preprint arXiv:2103.15670 (2021). |
ICLR | Title
Variational oracle guiding for reinforcement learning
Abstract
How to make intelligent decisions is a central problem in machine learning and artificial intelligence. Despite recent successes of deep reinforcement learning (RL) in various decision making problems, an important but under-explored aspect is how to leverage oracle observation (the information that is invisible during online decision making, but is available during offline training) to facilitate learning. For example, human experts will look at the replay after a Poker game, in which they can check the opponents’ hands to improve their estimation of the opponents’ hands from the visible information during playing. In this work, we study such problems based on Bayesian theory and derive an objective to leverage oracle observation in RL using variational methods. Our key contribution is to propose a general learning framework referred to as variational latent oracle guiding (VLOG) for DRL. VLOG features preferable properties such as its robust and promising performance and its versatility to incorporate with any value-based DRL algorithm. We empirically demonstrate the effectiveness of VLOG in online and offline RL domains with tasks ranging from video games to a challenging tile-based game, Mahjong. Furthermore, we publish the Mahjong environment and an offline RL dataset as a benchmark to facilitate future research on oracle guiding1.
1 INTRODUCTION
Deep reinforcement learning (DRL) has undergone rapid development in recent years (Sutton & Barto, 2018; Mnih et al., 2015; Vinyals et al., 2019). However, there is a common and important but under-explored aspect in RL: imagine that after playing a Poker game, a human player may look at the replay to check opponents’ hands and analyze this information to improve his/her playing strategy (or policy) for the next time. We refer to information like opponents’ hands as oracle observation, defined as the information invisible to the agent during online task execution but available during offline training. By contrast, the information available during task execution is called executor observation. Such a scenario has been referred to as oracle guiding for RL (Li et al., 2020; Fang et al., 2021) (see Sec. 3 for a formal definition). Oracle guiding is common in real life. For example, when taking an examination, the oracle observation is the answers to similar questions, which are available only during preparation; and when training a robot to perform tasks on the Moon, we can provide the robot with information about the terrain during training, which is not available during execution. The type of oracle observation can be diverse, including hindsight information (Harutyunyan et al., 2019; Guez et al., 2020), human feedback (Knox & Stone, 2009; Loftin et al., 2016; MacGlashan et al., 2017), re-calibrated data with post-processing, and hidden states in a partially observed setting (Li et al., 2020).
While humans naturally perform oracle guiding when learning to make decisions, it remains challenging in RL. The difficulties include: (1) how to guarantee that learning with oracle observation improves the main decision model using executor observation only, and (2) if introducing an auxiliary loss leveraging oracle observation, how to trade off between the main loss and the auxiliary loss. While recent studies attempted to model oracle guiding in RL (Guez et al., 2020; Li et al., 2020; Fang et al., 2021), none of them addressed these difficulties (refer to the Related Work section for more details). In particular, all these proposed methods are heuristic: although empirical results showed performance gains with oracle guiding, it is not theoretically guaranteed that the usage of oracle observation improves execution performance.

∗Work done during an internship in Microsoft Research Asia. Email: dongqi.han@oist.jp
1https://github.com/Agony5757/mahjong
In this paper, we propose a fundamentally new idea for oracle guiding based on Bayesian theory. Taking Poker as an example, we know that learning the optimal strategy is tractable if the global, true state of the environment (or simply state2) is known, including all visible or invisible cards, the opponents’ playing styles, etc. (Azar et al., 2017; Jin et al., 2018; 2020). A key part of skill improvement is learning to estimate the probability distribution of the environmental state from executor observation. The common way human experts do this is to watch match replays in which the oracle observation (e.g., opponents’ hands) is available, and then use the oracle-estimated state to correct the executor-estimated state. We interpret this in Bayesian language: the executor-estimated state is the prior distribution, and the oracle-estimated one is the posterior distribution. Thus, the training objective can be considered two-fold: learning to make decisions based on the posterior estimation of state, and learning a prior distribution of state closer to the posterior one.
We formulate this idea by proposing a novel learning framework for general oracle guiding problems based on variational Bayes (VB) (Kingma & Welling, 2014), referred to as variational latent oracle guiding (VLOG). VLOG has several preferable properties. First, VLOG is theoretically guaranteed to leverage oracle observation for improving the decision model using executor observation. Second, VLOG is a versatile DRL framework that can be integrated with any value-based RL algorithm and is agnostic to the type of oracle observation. Third, VLOG does not require tuning additional hyper-parameters. Finally, we empirically show that VLOG contributes to better performance on a variety of decision-making tasks in both online and offline RL domains. The tasks include simple maze navigation, video game playing, and the particularly challenging tile-based game Mahjong, in which humans heavily leverage oracle observation during learning (Li et al., 2020). We also contribute to the community by taking Mahjong as a benchmarking task for oracle guiding and publishing the RL environment and dataset to facilitate future research.
2 RELATED WORK
In the past few years, research interest has grown in DRL and imitation learning (Chen et al., 2020) that leverage oracle or hindsight information. For DRL, Guez et al. (2020); Fang et al. (2021) considered hindsight observation (executor observation at future steps) as the oracle observation during training. Guez et al. (2020) used hindsight observation to facilitate learning a representation of the current state. Another method (Fang et al., 2021) was used for stock trading: the authors trained a teacher (oracle) policy with hindsight information, and employed network distillation to make the student policy behave more similarly to the teacher policy. Both methods (Guez et al., 2020; Fang et al., 2021) are heuristic and focus on making use of future observations for better sequential modeling, while VLOG is theoretically guaranteed for any kind of oracle observation. For applications in imperfect-information games, Suphx (Li et al., 2020), a DRL-based AI for Mahjong, also introduced a method to leverage oracle observation (opponents’ hands) for stronger performance. It concatenates the oracle observation with the executor observation as the input of the policy network, where the oracle observation is multiplied by a scalar variable that is annealed from 1 to 0 during the training course. However, the method used in Li et al. (2020) is also heuristic and has only been tested in one task.
Variational Bayes (VB) is a well-established method and has been taken advantage of in RL. For example, control as probabilistic inference uses VB to connect the objective function of RL and the variational lower bound of a probabilistic inference problem (Furmston & Barber, 2010; Weber et al., 2015; Levine, 2018). Our idea differs since it frames value regression as a maximum-likelihood problem, and then applies VB to solve it (see Sec. 4). Also, the usage of variational Bayesian network models for DRL has recently captured researchers’ attention. For example, Ha & Schmidhuber (2018) proposed to employ a VAE to reduce the high dimensionality of image observations; Igl et al. (2018); Han et al. (2020); Lee et al. (2020) proposed variational RNNs as state-transition models to encode the belief states of the agent; Yin et al. (2021) utilized a variational sequential generative model for predicting future observations and used the prediction error to infer an intrinsic reward encouraging exploration; and Okada et al. (2020) demonstrated a performance gain by using a deep Bayesian planning model in continuous control tasks. Our study differs from the mentioned works by focusing on oracle guiding, and VLOG does not involve learning a state transition model.

2Generally, oracle observation does not necessarily contain all the information of the environmental state.
3 ORACLE GUIDING POMDP
Here we define the problem based on the Partially Observable Markov Decision Process (POMDP) (Sondik, 1978). An oracle guiding POMDP is distinguished from the original POMDP by having two types of observations: executor and oracle. The executor observation x is always available to the agent, and the agent’s decision making (execution) relies on it. The oracle observation x̂ is not accessible during execution, but can be obtained afterward. x is included in x̂ since the former is always available. Thus x̂ contains no less information about the underlying environment state than x.
A formal definition of an oracle guiding POMDP is a tuple 〈S,A,P0, T ,X, X̂,O, Ô, γ〉, where S and A are the state and action spaces, respectively. P0 specifies the initial state distribution such that P0(s) is the probability of a state s ∈ S being an initial state. T specifies the state transition probability such that T (s′, r|s, a) is the probability of reaching a new state s′ ∈ S with an immediate reward r ∈ R after taking an action a ∈ A at a state s ∈ S. X denotes the executor observation space and X̂ denotes the oracle observation space. O specifies the executor observation probability such that O(x|s) is the probability of an executor observation x ∈ X at a state s ∈ S. Similarly, Ô specifies the oracle observation probability such that Ô(x̂|s) is the probability of an oracle observation x̂ ∈ X̂ at a state s ∈ S. γ ∈ [0, 1) is the discount factor. Value functions are the expected value of return from a particular state or state-action pair. For a policy π, its Q-function is defined by qπ(s, a) := Eπ[∑∞n=0 γn rt+n | st = s, at = a], where Eπ indicates the expectation when the policy π is followed. The state value function vπ is defined by vπ(s) := Eπ(a|s)[qπ(s, a)]. The agent aims to maximize EP0(s)[vπ(s)] with respect to π, and value functions play an important role to this end (Sutton & Barto, 2018).
4 VLOG: VARIATIONAL LATENT ORACLE GUIDING
Let us introduce a latent vector zt, a probabilistic variable representing the environmental state st. From a Bayesian perspective, we consider the prior distribution p(zt|xt) as the agent’s estimated probability density function (PDF) of zt based on the executor observation xt. Meanwhile, the posterior PDF q(zt|x̂t) is modeled based on the oracle observation x̂t. In RL, the most basic requirement is to make a good estimation of return by a value function approximator v(xt) := ∫ v(zt) p(zt|xt) dzt (we denote it by v, but it can also be a Q function by simply replacing xt with (xt, at)) based on available information, i.e., the executor observation xt. The target of return, denoted by vtart, can be estimated by any value learning algorithm such as TD(0) (Sutton & Barto, 2018), Peng’s Q(λ) (Kozuno et al., 2021), etc. (generally, vtar can always be given by Bellman equations; however, one can also use the Monte-Carlo return as vtar if available). In particular, we employed double Q-learning with a dueling architecture (Wang et al., 2016) to compute vtar due to its effectiveness and simplicity (Sec. 5 and B.1). We want to maximize the log-likelihood objective of the estimation of return based on the executor observation xt (i.e., for the executor model)
L := log P(v(xt) = vtart | xt) = log ∫ P(v(zt) = vtart | zt) p(zt|xt) dzt

= log ∫ [q(zt|x̂t) / q(zt|x̂t)] P(v(zt) = vtart | zt) p(zt|xt) dzt.

By Jensen’s inequality, we have

L ≥ ∫ q(zt|x̂t) log [P(v(zt) = vtart | zt) p(zt|xt) / q(zt|x̂t)] dzt

= ∫ [q(zt|x̂t) log P(v(zt) = vtart | zt) − q(zt|x̂t) log (q(zt|x̂t) / p(zt|xt))] dzt

= Eq(zt|x̂t)[log P(v(zt) = vtart | zt)] − DKL(q(zt|x̂t) ∥ p(zt|xt)) := LVLOG,   (1)

where the first term is the oracle prediction error and the second term is the regularization term.
Thus we can maximize our initial objective L via maximizing LVLOG, which is also known as the variational lower bound (Kingma & Welling, 2014), but in our oracle-guiding scheme. Since p(zt|xt) and q(zt|x̂t) represent the PDFs of the latent vector obtained from the executor observation and oracle observation, respectively, the meanings of the two terms in LVLOG are now clear: the first term, i.e., the oracle prediction error, helps to improve value estimation from the posterior latent state distribution (zt computed with oracle observation); and the second term, i.e., the regularization term, helps to shape the prior representation of zt closer to the posterior one as latent oracle guiding. We would like to highlight that the VLOG objective is the lower bound of the objective for the prior executor model v(xt) (the estimation of return using executor observation xt) with the usage of oracle observation x̂t. This lower bound guarantees that the usage of oracle observation facilitates the learning of the executor model, which is our original motivation. Remark 1. One may use any shape of the approximate posterior q, depending on which different instances of VLOG are possible. Furthermore, one may directly use v(xt) instead of p(zt|xt). These design choices allow users to incorporate any prior knowledge on oracle observation. For example, if one knows that the range of a state value at xt is a closed interval [l, u], the approximate posterior q(vt|x̂t) can be restricted to a family of probability distributions supported on [l, u].
4.1 IMPLEMENTATION WITH NEURAL NETWORKS
Inspired by the implementation of the variational auto-encoder (VAE, Kingma & Welling (2014)), we propose the neural network architecture of VLOG (Fig. 1). The executor observation xt and oracle observation x̂t are processed by two distinct encoder networks to compute the prior and posterior distributions of the latent vector, respectively. During training, both xt and x̂t are available, and all the network parameters are updated by maximizing the VLOG objective in an end-to-end manner (Fig. 1A). During execution, the agent computes the prior distribution p(zt|xt) for decision making (Fig. 1B) without using oracle observation. zt is sampled from parameterized normal distributions

p(zt|xt) = N(µpt, exp(log σpt)),  (µpt, log σpt) = prior_encoder(xt),
q(zt|x̂t) = N(µqt, exp(log σqt)),  (µqt, log σqt) = posterior_encoder(x̂t).
For computing P(v(zt) = vtart |zt) in Eq. 1, we simply assume it follows a normal distribution, and estimate it with the mean square error between v(zt) and vtart in practice. The reparameterization trick is used to perform end-to-end training as in the VAE (Kingma & Welling, 2014). The output of the decoder (value function) is then obtained by v(zt) = decoder(zt). Note that zt is obtained using the posterior encoder during training, and using the prior encoder during execution (Fig. 1A, B).
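For concreteness, below is a minimal PyTorch sketch of one VLOG training step under these Gaussian assumptions; the encoder/decoder interfaces and variable names are ours for illustration, and the closed-form KL between two diagonal Gaussians is used:

import torch

def vlog_loss(prior_encoder, posterior_encoder, decoder, x, x_oracle, v_target, beta):
    # Encoders output the mean and log standard deviation of diagonal Gaussians.
    mu_p, log_sigma_p = prior_encoder(x)
    mu_q, log_sigma_q = posterior_encoder(x_oracle)
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    z = mu_q + torch.exp(log_sigma_q) * torch.randn_like(mu_q)
    # Oracle prediction error under the Gaussian assumption: mean square error.
    pred_loss = ((decoder(z) - v_target) ** 2).mean()
    # Closed-form KL divergence KL(q || p) between two diagonal Gaussians,
    # summed over latent dimensions and averaged over the batch.
    kl = (log_sigma_p - log_sigma_q
          + (torch.exp(2 * log_sigma_q) + (mu_q - mu_p) ** 2)
          / (2 * torch.exp(2 * log_sigma_p)) - 0.5).sum(dim=-1).mean()
    return pred_loss + beta * kl, kl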
4.2 TASK-AGNOSTIC LATENT BOTTLENECK CONTROL
To learn a better representation, we borrow the idea from β-VAE (Higgins et al., 2016) of multiplying the regularization term by a coefficient β. Thus we have the loss function (negative lower bound)
JβVLOG = −Eq(zt|x̂t)[log P(v(zt) = vtart | zt)] + β DKL(q(zt|x̂t) ∥ p(zt|xt)).   (2)
The hyper-parameter β controls the capacity of the latent information bottleneck (Tishby & Zaslavsky, 2015; Alemi et al., 2017). We found the choice of β to be important for the performance of VLOG in RL (see Appendix B.3). However, having extra hyper-parameters is not desired. Inspired by the method used in Burgess et al. (2017) for controlling the scale of the KL divergence in β-VAE, we propose a task-agnostic method to automatically adjust β by setting a target KL divergence DtarKL. In particular, we minimize the auxiliary loss function (with β as the optimized parameter)

Jβ = (log10 DtarKL − log10 DKL(q(zt|x̂t) ∥ p(zt|xt))) log β.   (3)
The intuition here is to strengthen the regularization by increasing β when the divergence between prior and posterior is too large, and vice versa. This method is similar to that used in soft actor-critic for automatically adjusting the entropy coefficient (Haarnoja et al., 2019), while we use it for the KL divergence coefficient. Importantly, we found a well-performing value DtarKL = 50 that is agnostic to other design choices: it worked well across a range of different tasks and networks (Sec. 5). Therefore, we do not need to tune β. We provide more discussion about this method in Appendix D.
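A minimal sketch of this adaptive update, in the style of the automatic entropy tuning of soft actor-critic, is given below; parameterizing log β and the learning rate are our illustrative assumptions:

import math
import torch

log_beta = torch.zeros(1, requires_grad=True)  # beta = exp(log_beta), initialized to 1
beta_optimizer = torch.optim.Adam([log_beta], lr=1e-4)
kl_target = 50.0  # the task-agnostic target D_KL^tar

def update_beta(kl_value):
    # Eq. 3: increase beta when the measured KL exceeds the target, decrease it otherwise.
    loss_beta = (math.log10(kl_target) - torch.log10(kl_value.detach())) * log_beta
    beta_optimizer.zero_grad()
    loss_beta.backward()
    beta_optimizer.step()
    return log_beta.exp().item()  # the beta used in the VLOG loss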
5 EXPERIMENTS
How does VLOG perform in practice? We investigated the empirical performance of VLOG on three types of tasks using online or offline RL, from simple to difficult. In the following experiments, we used double DQN with a dueling network architecture (van Hasselt et al., 2016; Wang et al., 2016) as the base RL algorithm; the model and loss functions for RL are defined in Appendix B.1. As DRL is susceptible to the choice of hyper-parameters, introducing any new hyper-parameters might obscure the effect of oracle guiding. Double DQN and the dueling architecture are preferable for the base algorithm since they require no additional hyper-parameters, in contrast to other DQN variants (Hessel et al., 2018), such as prioritized experience replay (Schaul et al., 2016), noisy networks (Fortunato et al., 2018), categorical DQN (Bellemare et al., 2017), and distributed RL (Kapturowski et al., 2018). Importantly, we used the same hyper-parameter setting for all methods and environments as much as possible (see Appendix B.2).
5.1 MAZE
We first demonstrate how VLOG helps to shape the latent representation by leveraging oracle observation in learning. The testbed is a maze navigation task3 (Fig. 2A) with 10×10 grids. The executor observation is the (x, y) position, where x, y are continuous values randomly sampled within each grid of the maze (thus the observations in two adjacent but wall-separated grids may be very close). At each step, the agent selects an action (going up, down, right, or left) and moves to another grid if not blocked by a wall. We provided the VLOG agent with the oracle observation (xc, yc, dg) during training, where xc, yc are the coordinates of the center of the current grid, and dg is the (shortest) path distance to the goal from the current grid. It is intuitive that although the raw observation is (x, y), dg matters more in a maze navigation task. We empirically investigated how much such oracle observation used in the learning of VLOG could help to shape the latent representation of z with respect to dg rather than position. The encoder and decoder were both 2-layer multi-layer perceptrons (MLPs) with width 256 and ReLU activation. The size of the latent vector zt for VLOG was 128 since we computed both µ and σ (Appendix C).
Experiments show that the baseline agent struggled to reach the goal (Fig. 2B), while VLOG agents stably solved the task after learning. To check how the usage of VLOG affected the learned latent representation, we visualize the latent representations of both the VLOG and baseline models with principal component analysis (PCA; Pearson, 1901). In Fig. 2C, we map the path distance to goal dg to color and plot the scores of the first 2 PCs of z (computed using executor observation) for VLOG and the corresponding latent state for baseline using successful trials (“latent layer” in Fig. 1C). The latent state of VLOG showed a relatively smoother and more interpretable representation of distance-to-goal compared to that of baseline. We then plot the latent representations for different positions in the maze in Fig. 2D. The latent state of VLOG more clearly represented dg, consistent with the result in Fig. 2C. In particular, we looked into a rectangular region (denoted by rectangles in Fig. 2D) inside which the left 2 grids and right 2 grids are segregated by a wall. We found the corresponding areas in the latent PC space and circled them in Fig. 2C. While these 4 grids are close in (x, y) (executor observation), their distances-to-goal (oracle observation) are highly distinct. By leveraging oracle guiding using VLOG, the agents can clearly differentiate the left 2 grids and the right 2 grids in latent space, as shown in Fig. 2C, D, left (note that the latent state z here of VLOG was computed using executor observation only). By contrast, the latent representations of these grids overlapped for the baseline model, which did not utilize the oracle observation (Fig. 2C, D, right). In sum, we demonstrated with a toy example that VLOG effectively helped the latent space to couple with oracle state useful for the task. The following sections transfer to experiments on more complicated tasks and discuss how VLOG can improve practical performance.

3https://github.com/MattChanTK/gym-maze
5.2 NOISY MINATAR
To evaluate how VLOG scales to higher-dimensional state spaces, we tested it on a set of MinAtar video games. MinAtar (Young & Tian, 2019) is a test platform for AI agents, which implements 5 miniaturized Atari 2600 games with discrete actions (Seaquest, Breakout, Space Invaders, Freeway and Asterix). MinAtar is inspired by the Arcade Learning Environment (Bellemare et al., 2013) but simplifies the environments for efficiency. The observation is 10×10 pixels with multiple channels indicating different objects. In the real world, observations usually contain some noise. Thus it is natural to consider the noisy observation as the partially-observable executor observation, and the original, noise-free observation as the oracle one. Suppose that at each frame, each pixel may “break” randomly with an independent probability of 1/8 (Fig. 3A). The original observation at a broken pixel is erased and replaced by a different value in all channels. We consider such noisy MinAtar environments with the noisy pixels as the executor observation and the original pixels as the oracle observation.
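A minimal sketch of this noise process is given below, assuming a (C, H, W) observation tensor; the replacement value for broken pixels is an illustrative choice, as the text only requires it to differ from the original values:

import torch

def add_pixel_noise(obs, p_break=1/8, broken_value=-1.0):
    # obs: (C, H, W) original (oracle) observation.
    # Each pixel breaks independently with probability p_break; a broken pixel is
    # overwritten in all channels, yielding the noisy executor observation.
    broken = torch.rand(obs.shape[1:]) < p_break  # (H, W) boolean break mask
    noisy = obs.clone()
    noisy[:, broken] = broken_value
    return noisy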
The network structure was the same as that for Maze, but the encoder was replaced by a CNN (Appendix C). We ran experiments on all 5 environments of MinAtar with VLOG as well as baseline, oracle, and alternative oracle guiding methods (see Appendix A for details). The baseline model always uses executor observation as the network input (Fig. 1C). Oracle is the same as baseline except that it always receives oracle observation (i.e., cheating; we ran the experiments of oracle for reference). VLOG-no oracle is an ablation of VLOG in which we use the executor observation as the input to the posterior encoder (oracle observation is not used). Suphx-style oracle guiding is the oracle guiding method used in the Mahjong AI Suphx (Li et al., 2020), in which the executor observation and the dropped-out oracle observation (with dropout probability pdropout) are concatenated as the input to the network. As training proceeds, pdropout is gradually increased from 0 to 1, so the trained network does not need oracle observation as input (Appendix A). OPD-style oracle guiding is the oracle guiding method used in oracle policy distillation (OPD) (Fang et al., 2021). OPD-style oracle guiding first trains a teacher model using oracle observation as input, and then trains the executor model with an auxiliary loss, which is the error between the executor’s and the teacher’s estimations of the value function (Appendix A).
The results show that oracle usually performed the best, as expected. We normalized the performance of non-oracle models using the oracle model as a reference for clearer comparison (Fig. 3B). Among all the oracle guiding methods (VLOG, OPD-style and Suphx-style), VLOG consistently performed the best. It is notable that VLOG and VLOG-no oracle performed surprisingly well in Seaquest. This can be explained by the fact that Seaquest is a task with a local optimum (see Appendix E), while the stochasticity in the hidden states of VLOG helped exploration in latent space to escape from the local optimum (a similar idea is Fortunato et al. (2018), but their noise was added to the network weights). Except in Seaquest, VLOG-no oracle did not show a significant performance difference from baseline, showing that the performance gain of VLOG in this task set mainly came from leveraging oracle observation for shaping the latent distribution, and that the usage of a variational Bayesian model was at least not harming performance when there was no helpful oracle information.
5.3 OFFLINE LEARNING ON MAHJONG
Mahjong is a popular tile-based game with hundreds of millions of players worldwide (here we consider the Japanese variant). The game is like many other card games (but using tiles instead of cards), in which multiple (usually four) players draw and discard tiles (136 tiles in total) alternately to satisfy winning conditions. It is a highly challenging game characterized by (1) imperfect information in the executor observation (a player cannot see opponents’ private tiles or the remaining tiles to be drawn), (2) stochastic state transitions as in many card games, and (3) extremely high game complexity (i.e., the number of distinct, legal game states). The complexity of Mahjong is much larger than 10166 (Appendix F.1). For reference, the complexity of Go is ∼ 10172 (Silver et al., 2016) and the complexity of no-limit Poker is ∼ 10162 (Johanson, 2013). In Mahjong, it is hard to make optimal decisions based on executor observation because the outcomes heavily depend on invisible information, and the complexity of the invisible state space is as high as 1048 on average (Li et al., 2020). In response to this challenge, Li et al. (2020) introduced Suphx-style oracle guiding and demonstrated a performance gain. We therefore consider Mahjong a promising test platform for oracle guiding methods. Since the number of possible states in Mahjong is extremely large, it is costly to explore with random actions in an online RL manner, and no previous work could train a strong Mahjong AI with purely online RL. Also, we would like to examine the effectiveness of VLOG in offline RL settings. For these reasons, we transferred to offline RL (Levine et al., 2020) for the Mahjong task using expert demonstrations.
We processed about 23M steps of human experts’ plays from the online Mahjong game platform Tenhou (https://tenhou.net/mjlog.html) into a dataset for offline RL (data were augmented using the symmetry in Mahjong, see Appendix F). We also created a simulator of Mahjong as the testing environment. Though there are sophisticated ways to encode the state and action spaces of Mahjong (Li et al., 2020), we attempt to make simplifications with reasonable approximations since our goal is not to create a strong Mahjong AI, but to use Mahjong as a platform to study oracle guiding problems. In our case, the action space is composed of 47 discrete actions covering all decisions in Mahjong. An executor observation is a matrix encoding public information and the current player’s private hand; an oracle observation concatenates the executor observation with the information of the opponents’ private hands (see Appendix F). We used a 1-D CNN as the encoder, as is common in Mahjong AIs (Li et al., 2020), and the size of zt and the decoder network width were increased to 512 and 1024, respectively (Appendix C).
Note that although Mahjong is a 4-player game, using offline RL data to train an agent does not involve multi-agent RL (Zhang et al., 2021) because the offline dataset is fixed: the opponents have fixed policies and thus can be considered parts of the environment. Our experiments focused on single-agent RL to avoid the complexity introduced by multi-agent RL. We investigated two kinds of offline RL settings. The first is conservative Q-learning (CQL) (Kumar et al., 2020). Our CQL setting differed from the online RL setting in previous sections by adding an auxiliary CQL loss (Kumar et al., 2020) to the Q-learning loss function (Appendix B.1.2). The other is behavior cloning (BC). Although VLOG was designed for value-based RL, we could straightforwardly incorporate VLOG with BC by letting the network predict the action instead of the Q function. Learning was conducted by minimizing the cross entropy between the output and the target action (demonstration), as in a classification problem. Note that we did not test OPD-style oracle guiding in the BC setting because it would be equal to the baseline: we can directly use demonstration actions as the oracle policy for distillation.
Because Mahjong is a zero-sum, four-player game, we tested the performance of the trained models in two scenarios: playing with the (trained) baseline model (Table 1) and playing with each other (fighting each other, Table 2). In the first scenario, four agents played the same matches at the game table, where two of them were the agents being tested and the other two were baseline models. Although each agent played for itself and there was no communication between players, we simply added up the payoffs of the two tested agents, and considered that they won a match if one of them ranked top (so the match win rate would be 50% if they were equally strong as the baseline) for statistics (Table 1).
For CQL, the results (Table 1 left and 2 upper) show that VLOG substantially outperformed the baseline and alternative methods (because Mahjong is a highly random game, a 55.7% match win rate indicates a large skill gap). Interestingly, VLOG was even comparable to oracle. This can be explained by the fact that VLOG also benefited from its Bayesian property, which is consistent with VLOG-no oracle showing a significant performance gain over the baseline model (Table 1 left). Still, the oracle model learned to reduce deal-ins (i.e., a player discards a tile and another player wins the game by picking up this tile to complete a winning hand) since it could explicitly see the opponents’ private tiles, showing a much lower deal-in rate than the other non-cheating models (Table 2 upper).
In the BC setting, the agents did not learn a value function, but tried to predict human experts’ actions. Therefore, the training procedure did not involve reasoning about the relationship between the playing outcome and oracle observation, but just imitating human behaviors. This can be seen from the result that oracle did not substantially outperform baseline in BC (Table 1 right and 2 lower). However, VLOG and VLOG-no oracle still showed performance gains, thanks to the stochastic modeling.
6 SUMMARY
We have proposed VLOG – a variational Bayesian learning framework for leveraging oracle observation to facilitate DRL, especially in partially observable environments. VLOG is applicable to any RL problem in which there is oracle observation that may help the executor to make decisions.
We first introduced a latent vector z to represent the environmental state. The prior and posterior distributions of z are modeled using executor and oracle observation, respectively. Then, we derived a variational lower bound (Eq. 2); maximizing it optimizes the executor model using the oracle observation. We developed the corresponding methodology for DRL, which can be incorporated with most RL algorithms that estimate a value function.
If oracle observation contains more information for retrieving the true environmental state (or it is the true environmental state), VLOG’s oracle guiding in latent space helps to shape a latent representation in neural networks closer to the true one. We demonstrated this advantage of VLOG using the maze task. Then, we scaled VLOG up to solve image-based video games, and compared it with alternative oracle-guiding methods. Though all oracle-guiding methods showed performance gains over the baseline model, VLOG consistently performed the best. Finally, we transferred to the offline RL domain using the challenging tile-based game Mahjong, in which an executor plays with hidden information and random state transitions, and observed that VLOG achieved the best overall performance.
We also conducted an ablation study of VLOG (VLOG-no oracle) in which the posterior model received the executor observation instead of the oracle one. VLOG-no oracle demonstrated performance gains in tasks that may benefit from the stochasticity; otherwise, it performed similarly to the deterministic baseline. This clarified that the source of VLOG’s promising performance is two-fold: oracle guiding and stochastic modeling. Finally, we publish the dataset of Mahjong for offline RL and the corresponding RL environment so as to facilitate future research on oracle guiding.
ACKNOWLEDGEMENT
This work was supported by Microsoft Research Asia. Kenji Doya was supported by Japan Society for the Promotion of Science KAKENHI Grant Numbers JP16K21738, JP16H06561 and JP16H06563, as well as by Okinawa Institute of Science and Technology.
REPRODUCIBILITY STATEMENT
The source code of VLOG can be found in the Supplementary Material.
ETHICS STATEMENT
We declare no conflicts of interest. We tried to use colors friendly to people with color vision deficiencies (Fig. 2C, D) and distinguishable markers for the performance curves of different models (Fig. 2B, Fig. 3 and Fig. 6). Our Mahjong dataset was generated using downloadable, public game replay data from Tenhou.net with post-processing. The dataset contains no private information about players. Since VLOG is a general framework for leveraging oracle information, we cannot foresee any direct application of VLOG to malicious purposes. However, any new RL algorithm might confer increased autonomy on an agent, and eventually lead to a completely autonomous agent, which could be used for malicious purposes, e.g., fully autonomous soldiers.
B RL ALGORITHMS AND HYPER-PARAMETERS
B.1 RL ALGORITHMS
B.1.1 DUELING DOUBLE DQN FOR MAZE AND MINATAR TASKS
As we discussed in Sec. 5, we used double DQN with a dueling network architecture (van Hasselt et al., 2016; Wang et al., 2016) as the base RL algorithm, because it works relatively well (Hessel et al., 2018) without introducing additional hyper-parameters.
The Dueling architecture of DQN (Wang et al., 2016) is defined as follows (see Appendix C for hidden layer size):
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    def __init__(self, input_size, action_num, hidden_layers):
        super(DuelingQNetwork, self).__init__()
        self.input_size = input_size
        self.action_num = action_num
        self.hidden_layers = hidden_layers
        # Shared MLP trunk.
        self.network_modules = nn.ModuleList()
        last_layer_size = input_size
        for layer_size in hidden_layers:
            self.network_modules.append(nn.Linear(last_layer_size, layer_size))
            self.network_modules.append(nn.ReLU())
            last_layer_size = layer_size
        # Dueling heads: a scalar state value and per-action advantages.
        self.value_layer = nn.Linear(last_layer_size, 1)
        self.advantage_layer = nn.Linear(last_layer_size, action_num)
        self.main_network = nn.Sequential(*self.network_modules)

    def forward(self, x):
        h = self.main_network(x)
        v = self.value_layer(h).repeat_interleave(self.action_num, dim=-1)
        q0 = self.advantage_layer(h)
        # Subtract the mean advantage for identifiability, then combine.
        a = q0 - torch.mean(q0, dim=-1, keepdim=True).repeat_interleave(
            self.action_num, dim=-1)
        q = v + a
        return q
Double deep Q-learning (van Hasselt et al., 2016) was used to compute Qtarget in Fig. 1A (one can use any other algorithm to compute Qtarget without changing other parts). In particular, as in Wang et al. (2016), we have

Qtargett = rt + γ Q(zt+1, argmaxa′ Q(zt+1, a′; θ); θ−),

where rt is the reward at step t, γ is the discount factor (Table 3), and θ denotes the parameters of the Q network (MLP decoder) for computing the Q-function (Appendix C). Note that z is given by the posterior encoder with the oracle observation x̂ as input, since the oracle prediction error term Eq(zt|x̂t)[log P(v(zt) = vtart |zt)] in Eq. 1 is the expectation over the posterior distribution q(z|x̂). Following deep RL conventions (Mnih et al., 2015; Wang et al., 2016; van Hasselt et al., 2016), we used a target Q network with the same structure as the original Q network, whose parameters are denoted by θ− (Table 3). Every 1,000 steps, the target Q network copies the parameters from the original Q network (Table 3). The first term of the VLOG loss function (Eq. 2) is then simply given by the mean square error between Qtarget and the output of the Q network (MLP decoder) for the Maze and MinAtar tasks.
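A minimal sketch of this target computation is given below; the function and variable names are ours, and done is assumed to be a float mask for terminal transitions:

import torch

@torch.no_grad()
def double_dqn_target(q_net, target_q_net, z_next, reward, done, gamma=0.99):
    # Select the next action with the online network, evaluate it with the target network.
    a_star = q_net(z_next).argmax(dim=-1, keepdim=True)
    q_next = target_q_net(z_next).gather(-1, a_star).squeeze(-1)
    return reward + gamma * (1.0 - done) * q_next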
B.1.2 DUELING DOUBLE DQN WITH CONSERVATIVE Q LEARNING FOR MAHJONG
In Mahjong, as we transfer to the offline RL domain (Sec. 5.3), directly using an off-policy RL algorithm usually results in very unsatisfying performance (Levine et al., 2020).
Therefore, we complement the loss function of VLOG (Eq. 2) with an auxiliary conservative Q-learning (CQL) loss (Kumar et al., 2020),
JCQL = α Ex̂,a∼D[log Σa′∈A exp(Q(z(x̂), a′; θ)) − Q(z(x̂), a; θ)],

where D is the offline dataset we used for Mahjong and α = 1. The combined loss function used for Mahjong (CQL) is JβVLOG + JCQL.
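A minimal sketch of this auxiliary term is shown below; q_values denotes the Q-network outputs for all actions at the latent states of the offline batch (the names are ours, for illustration):

import torch

def cql_loss(q_values, actions, alpha=1.0):
    # q_values: (B, num_actions); actions: (B,) long tensor of dataset actions.
    logsumexp_q = torch.logsumexp(q_values, dim=-1)
    data_q = q_values.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    # Push down Q-values of out-of-distribution actions relative to dataset actions.
    return alpha * (logsumexp_q - data_q).mean()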
B.2 HYPER-PARAMETER SELECTION
We summarize the hyper-parameters in Table 3.
B.3 SENSITIVITY ANALYSIS FOR β
As mentioned in Sec. 4.2, the coefficient β is important to the learning of VLOG agents. If we fix the value of β throughout training, a too-large or too-small β results in worse performance. The corresponding results are shown in Fig. 6.
C NETWORK STRUCTURES
For simplicity, we use the same hidden layer size for all the fully connected layers, where the hidden layer size is 256 for maze and MinAtar, and 1024 for Mahjong.
C.1 ENCODER
Since we targeted various tasks, we used a different encoder network for each type of environment. The prior and posterior encoders have the same structure except for different sizes of input features/channels.
For the maze task, the encoder was a 2-layer MLP with ReLU activation. The output size is also equal to the hidden layer size.
For the MinAtar tasks, we used a 2-D CNN encoder defined as follows:
import torch.nn as nn

# 2-D CNN encoder for the 10x10 multi-channel MinAtar observations.
cnn_module_list = nn.ModuleList()
cnn_module_list.append(nn.Conv2d(n_channels, 16, 3, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(16, 32, 3, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(32, 128, 4, 2, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(128, 256, 2, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Flatten())
cnn_minatar = nn.Sequential(*cnn_module_list)
where n_channels is the number of channels of the executor or oracle observation. The output size is equal to the hidden layer size.
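As a quick sanity check (ours, not from the paper), the spatial size of a 10×10 MinAtar observation shrinks as 10 → 8 → 6 → 2 → 1 through the four convolutions, so the flattened output has 256 features:

import torch

n_channels = 4  # hypothetical; the actual channel count depends on the game
x = torch.zeros(1, n_channels, 10, 10)
print(cnn_minatar(x).shape)  # torch.Size([1, 256]), i.e., the hidden layer size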
For Mahjong, because the second dimension of the observation (tile ID) has a local contextual relationship (Li et al., 2020), we used a 1-D CNN (convolving along the tile-ID dimension) as the encoder, defined as follows:
cnn_module_list = nn.ModuleList()
cnn_module_list.append(nn.Conv1d(n_channels, 64, 3, 1, 1))
cnn_module_list.append(nn.Conv1d(64, 64, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv1d(64, 64, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv1d(64, 32, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Flatten())
cnn_mahjong = nn.Sequential(*cnn_module_list)
The output size is 1088 (32 channels × 34 tiles), close to the hidden layer size.
C.2 LATENT LAYER
For VLOG and VLOG (no-oracle), the size of the z layer is half of the hidden layer size because we need to estimate both the mean and the variance of z. For all other models (baseline, oracle, OPD-style, Suphx-style), the latent layer is one fully connected layer of size hidden_layer_size with ReLU activation.
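A minimal sketch of the VLOG latent layer as we read this description (the function name is ours): the encoder output of size hidden_layer_size is split into a mean and a log standard deviation, and z is sampled with the reparameterization trick.

import torch

def sample_latent(encoder_output):
    # Split into (mu, log_sigma), each of half the hidden layer size.
    mu, log_sigma = encoder_output.chunk(2, dim=-1)
    eps = torch.randn_like(mu)
    return mu + torch.exp(log_sigma) * eps  # z = mu + sigma * eps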
C.3 DECODER
The decoders for all models were 2-layer MLPs of size hidden_layer_size. Except for BC on Mahjong, the input of the decoder was the output of the latent layer concatenated with the action, and we used the dueling Q-network structure (Wang et al., 2016) to output a scalar Q value. ReLU activation was used except at the output.
For BC on Mahjong, the input of the decoder was the output of the latent layer. The decoder output the action logits, and the actions could be obtained using softmax.
D TASK-AGNOSTIC LATENT BOTTLENECK CONTROL
As discussed in Sec. 4.2, the coefficient β of the regularization term in the VLOG loss function (Eq. 2) is adaptively regulated by Eq. 3, given $D^{\text{tar}}_{\text{KL}}$, which is itself another hyper-parameter. While our experiments demonstrated the effectiveness of this approach, the following discusses the reasoning behind this design choice.
In principle, replacing one hyper-parameter with another does not always make training easier. In practice, however (especially in deep RL), performance is highly sensitive to certain hyper-parameters; e.g., the entropy coefficient α in the original soft actor-critic algorithm (Haarnoja et al., 2018) needs to be tuned for each robotic task because the reward magnitude differs among tasks. It is therefore beneficial to replace a sensitive hyper-parameter with one that does not need fine-tuning. For example, in the follow-up paper on soft actor-critic, the authors used an adaptive entropy coefficient α by introducing another hyper-parameter, the entropy target (Haarnoja et al., 2019). They empirically found that setting the entropy target equal to the negative of the agent's degrees of freedom works well, thus avoiding tuning α.
Our idea of replacing β with $D^{\text{tar}}_{\text{KL}}$ follows similar reasoning. One obvious problem is that the magnitude of the “oracle prediction error” term (Eq. 2) depends on the reward magnitude of the task, so β would have to be adjusted to match the magnitude of the task reward. However, $D^{\text{tar}}_{\text{KL}}$ depends only on the magnitudes of the prior and posterior z, which do not differ much among tasks (usually on the order of 1). In practice, we found that $D^{\text{tar}}_{\text{KL}} = 50$ works well for all the tasks, including Maze, MinAtar, and Mahjong.
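A minimal sketch of this adaptive update (the gradient-based parameterization and learning rate are our assumptions; only Eq. 3 is from the paper): β is parameterized through its logarithm, and Jβ is minimized by gradient descent, which raises β when the KL divergence exceeds the target and lowers it otherwise.

import torch

log_beta = torch.zeros(1, requires_grad=True)  # beta = exp(log_beta), initialized at 1
beta_optimizer = torch.optim.Adam([log_beta], lr=1e-4)  # hypothetical learning rate

def update_beta(kl_divergence, target_kl=50.0):
    # kl_divergence: scalar tensor D_KL(q || p) averaged over the batch.
    # J_beta = (log10(D_tar) - log10(D_KL)) * log(beta), Eq. 3.
    j_beta = (torch.log10(torch.tensor(target_kl))
              - torch.log10(kl_divergence.detach())) * log_beta
    beta_optimizer.zero_grad()
    j_beta.backward()
    beta_optimizer.step()
    return log_beta.exp().item()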
Another way of regulating β is to employ a linear or exponential scheduler. For example, Burgess et al. (2017) used a linear scheduler for the target KL divergence and obtained good results. However, using a scheduler introduces more hyper-parameters (at least two: the initial β and the final β), which runs against our intention of reducing the impact of hyper-parameters.
E SEAQUEST LOCAL OPTIMUM
In Seaquest, the agent drives a submarine into the sea to shoot at enemies and rescue divers to earn scores. However, the submarine has limited oxygen: to survive, it must surface to replenish oxygen before running out, during which it temporarily cannot earn scores. A typical local optimum is to use the last remaining oxygen for diving instead of surfacing.
F MAHJONG
F.1 ESTIMATION OF GAME COMPLEXITY OF MAHJONG
We consider 4-player Japanese Mahjong4. Although there are minor variants of the rules, the following estimation applies to general cases.
For easier computation, we make two major simplifications (i.e., we estimate a lower bound of the game complexity): (1) melding from discard5 is not considered, and (2) information other than tiles, such as the results of preceding games and the players' points, is not considered.
There are 34 types of tiles, each with 4 duplicates (136 tiles in total). We further restrict our estimation to the last turn (i.e., when the last tile is drawn). Among the 136 tiles, 53 are in someone's hand (the 4 players hold 14, 13, 13, and 13 tiles, respectively). The permutation of tiles within one's hand does not make a difference, while the permutation matters for tiles not in a hand. The number of distinguishable configurations of the 136 tiles can thus be computed as $\frac{136!}{(13!)^3 \times 14! \times (4!)^{34}} \sim 10^{145}$.
Meanwhile, for each discarded tile, it is important to know whether it was discarded immediately after being drawn or not. For 70 discarded tiles, the number of possibilities is simply $2^{70} \sim 10^{21}$. Therefore, the lower bound of the game complexity of Mahjong is estimated as $10^{145} \times 10^{21} \sim 10^{166}$.
If the other simplified-away information is considered, the state space could be much larger. For example, consider the current points of each player. The most common rule is that each player starts with 25,000 points, with 100 points as the minimal unit (1,000 units in total), and the game terminates if someone's points become negative. The number of possibilities can thus be converted to the answer of “how many ways are there to distribute 1,000 candies to 4 kids”, which is $\frac{(1000+1)^{4-1}}{(4-1)!} \sim 10^{8}$.
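The three counts above can be verified numerically with a few lines of Python:

from math import factorial, log10

tile_configs = factorial(136) // (factorial(13)**3 * factorial(14) * factorial(4)**34)
print(round(log10(tile_configs)))                  # 145
print(round(log10(2**70)))                         # 21
print(round(log10((1000 + 1)**3 / factorial(3))))  # 8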
F.2 DETAILS OF OBSERVATION SPACE AND ACTION SPACE ENCODING
In our Mahjong environment6, the action space is composed of 47 discrete actions (Table 5). Because not all actions are available at every step, we also provide the set of valid actions among the 47 at each step according to the rules, which can be used during playing and learning.
The oracle observation has shape 111 × 34 (the executor observation is a part of the oracle observation, with shape 93 × 34) (Fig. 5). The first dimension corresponds to 111 features (channels). The second dimension (size 34) corresponds to the 34 Mahjong tiles (in the order Character 1-9, Dot 1-9, Bamboo 1-9, East, South, West, North, White, Green, Red). We used 1-D CNNs convolving along the second dimension for the encoders. Suppose the current player is player 0 and the other players are numbered 1, 2, 3 counter-clockwise. The value of any element of an observation is 1 or 0, as explained in Table 4.
4https://en.wikipedia.org/wiki/Japanese Mahjong 5A meld is a specific pattern of three or four tiles. A player can pick up a tile discarded by others to form a meld by displaying the meld publicly, if certain conditions are satisfied. 6The Mahjong environment used in this paper is available at https://github.com/pymahjong/pymahjong for reproducibility. However, we recommend using the newer version https://github.com/Agony5757/mahjong, which is better supported by the authors and much faster.
F.3 DATA AUGMENTATION
In (Japanese) Mahjong, there are 3 suit sets of tiles (characters, dots, bamboos), 4 wind tiles, and 3 dragon tiles. The 3 suit sets are symmetric and thus exchangeable with each other7; so are the 4 wind tiles and the 3 dragon tiles. Exploiting this symmetry, we augmented the offline RL dataset by randomly exchanging the 3 suit sets, the 4 wind tiles, and the 3 dragon tiles, respectively.
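A minimal sketch of this augmentation (the index layout follows the tile order given in Appendix F.2; the function name and array interface are ours): a random permutation of the 34-sized tile axis is built by shuffling the three 9-tile suit blocks, the 4 winds, and the 3 dragons. The same permutation should be applied to every observation of an episode.

import numpy as np

def augment_tile_permutation(obs, rng=np.random):
    # obs has shape (..., 34): Characters 1-9, Dots 1-9, Bamboos 1-9, 4 winds, 3 dragons.
    perm = np.arange(34)
    suit_order = rng.permutation(3)
    perm[:27] = np.concatenate([np.arange(9) + 9 * s for s in suit_order])
    perm[27:31] = 27 + rng.permutation(4)
    perm[31:34] = 31 + rng.permutation(3)
    return obs[..., perm]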
7There is one exception, the winning hand pattern “all green”. Since it is an extremely rare case, we simply ignore it. | 1. What is the focus of the paper regarding reinforcement learning with training-time privileged information?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application of variational Bayes?
3. Do you have concerns about the theoretical analysis of the proposed method, especially when compared to prior works?
4. How does the reviewer assess the clarity of certain aspects of the paper, such as Equation (1)?
5. What questions do you have regarding the experimental results, particularly for the Mahjong task? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, reinforcement learning (RL) with training-time privileged information is studied. This scenario is a relaxation of the traditional POMDP that allows additional observations to be available at training time. A variational Bayes approach is proposed under this scenario. The main idea lies in learning a joint latent space for both the testing-stage partial observations and the training-stage full observations. Experiments over several benchmark tasks, including Mahjong, a large-scale multi-player game with imperfect information, are conducted to verify the performance of the approach.
Review
Strengths of the paper:
Adopting variational Bayes is an interesting and reasonable approach to making use of the training-time privileged information.
Weaknesses of the paper:
In fact, the studied problem is not entirely new. Similar settings have been studied under imitation learning (IL), such as [1]. Furthermore, applying variational Bayes for latent space learning is also not a very novel idea in either RL or IL.
According to the paper, one of the major drawbacks of previous approaches to this problem is the lack of theoretical guarantees. However, in my view, variational Bayes also cannot be expected to work in all situations. Why and when would the proposed approach indeed work? A theoretical analysis is missing from the paper. Even though the experimental results show some advantage of the proposed approach, after reading the paper I still do not see a solid justification for why variational Bayes should outperform previous approaches in principle.
Some descriptions could be clearer:
In Equation (1), how is $v^{\text{tar}}_t$ obtained? The paper just says that it could be calculated by any policy evaluation method. But policy iteration relies on knowing the transition model and the reward function, which is not assumed in the paper.
In the Mahjong experiment, I wonder why VLOG-no oracle improves over the comparison methods. According to Equation (1), if no oracle observation is available, VLOG reduces to minimizing the maximum-likelihood loss of fitting the value function. I thought it would be no different from the baseline method, i.e., vanilla policy learning.
Reference: [1] Learning by cheating. https://arxiv.org/pdf/1912.12294.pdf |
ICLR | Title
Variational oracle guiding for reinforcement learning
Abstract
How to make intelligent decisions is a central problem in machine learning and artificial intelligence. Despite recent successes of deep reinforcement learning (RL) in various decision making problems, an important but under-explored aspect is how to leverage oracle observation (the information that is invisible during online decision making, but is available during offline training) to facilitate learning. For example, human experts will look at the replay after a Poker game, in which they can check the opponents' hands so as to improve their ability to estimate those hands from the visible information during play. In this work, we study such problems based on Bayesian theory and derive an objective to leverage oracle observation in RL using variational methods. Our key contribution is to propose a general learning framework referred to as variational latent oracle guiding (VLOG) for DRL. VLOG features desirable properties such as robust and promising performance and versatility: it can be incorporated into any value-based DRL algorithm. We empirically demonstrate the effectiveness of VLOG in online and offline RL domains with tasks ranging from video games to Mahjong, a challenging tile-based game. Furthermore, we publish the Mahjong environment and an offline RL dataset as a benchmark to facilitate future research on oracle guiding1.
1 INTRODUCTION
Deep reinforcement learning (DRL) has undergone rapid development in recent years (Sutton & Barto, 2018; Mnih et al., 2015; Vinyals et al., 2019). However, there is a common and important but under-explored aspect of RL: imagine that after playing a Poker game, a human player may look at the replay to check the opponents' hands and analyze this information to improve his/her playing strategy (or policy) for the next time. We refer to information like the opponents' hands as oracle observation, defined as information invisible to the agent during online task execution but available during offline training. By contrast, the information available during task execution is called executor observation. Such a scenario has been referred to as oracle guiding for RL (Li et al., 2020; Fang et al., 2021) (see Sec. 3 for a formal definition). Oracle guiding is common in real life: for example, taking an examination (the oracle observation is the answers to similar questions, which are available only during preparation), or training a robot to perform tasks on the Moon (when training the robot, we can provide it with information about the terrain, which is not available during execution). The type of oracle observation can be diverse, including hindsight information (Harutyunyan et al., 2019; Guez et al., 2020), human feedback (Knox & Stone, 2009; Loftin et al., 2016; MacGlashan et al., 2017), re-calibrated data with post-processing, and hidden states in a partially observed setting (Li et al., 2020).
While humans naturally perform oracle guiding when learning to make decisions, it remains challenging in RL. The difficulties include: (1) how to guarantee that learning with oracle observation
∗Work done during an internship in Microsoft Research Asia. Email: dongqi.han@oist.jp 1https://github.com/Agony5757/mahjong
improves the main decision model that uses only executor observation, and (2) if an auxiliary loss leveraging oracle observation is introduced, how to trade off between the main loss and the auxiliary loss. While recent studies attempted to model oracle guiding in RL (Guez et al., 2020; Li et al., 2020; Fang et al., 2021), none of them addressed these difficulties (refer to the Related Work section for more details). In particular, all these methods are heuristic: although empirical results showed performance gains with oracle guiding, it is not theoretically guaranteed that the use of oracle observation improves execution performance.
In this paper, we propose a fundamentally new idea for oracle guiding based on Bayesian theory. Taking Poker as an example, we know that learning the optimal strategy is tractable if the global, true state of the environment (or simply state2) is known, including all visible or invisible cards, the opponents' playing styles, etc. (Azar et al., 2017; Jin et al., 2018; 2020). A key part of skill improvement is learning to estimate the probability distribution of the environmental state from executor observation. The common way human experts do this is to watch match replays where the oracle observation (e.g., opponents' hands) is available, and then use the oracle-estimated state to correct the executor-estimated state. We interpret this in Bayesian language: the executor-estimated state is the prior distribution, and the oracle-estimated one is the posterior distribution. Thus, the training objective can be considered two-fold: learning to make decisions based on the posterior estimation of state, and learning a prior distribution of state closer to the posterior one.
We formulate this idea by proposing a novel learning framework for general oracle guiding problems based on variational Bayes (VB) (Kingma & Welling, 2014), referred to as variational latent oracle guiding (VLOG). VLOG has several desirable properties. First, VLOG is theoretically guaranteed to leverage oracle observation for improving the decision model that uses executor observation. Second, VLOG is a versatile DRL framework that can be integrated into any value-based RL algorithm and is agnostic to the type of oracle observation. Third, VLOG does not require tuning additional hyper-parameters. Finally, we empirically show that VLOG yields better performance on a variety of decision-making tasks in both online and offline RL domains. The tasks include simple maze navigation, video game playing, and the particularly challenging tile-based game Mahjong, in which humans heavily leverage oracle observation during learning (Li et al., 2020). We also contribute to the community by establishing Mahjong as a benchmark task for oracle guiding and publishing the RL environment and dataset to facilitate future research.
2 RELATED WORK
In the past few years, research interest has grown in DRL and imitation learning (Chen et al., 2020) that leverages oracle or hindsight information. For DRL, Guez et al. (2020) and Fang et al. (2021) considered hindsight observation (executor observation at future steps) as the oracle observation during training. Guez et al. (2020) used hindsight observation to facilitate learning a representation of the current state. The method of Fang et al. (2021) was used for stock trading: the authors trained a teacher (oracle) policy with hindsight information and employed network distillation to make the student policy behave more similarly to the teacher policy. Both methods (Guez et al., 2020; Fang et al., 2021) are heuristic and focus on making use of future observation for better sequential modeling, while VLOG is theoretically grounded for any kind of oracle observation. Regarding applications in imperfect-information games, Suphx (Li et al., 2020), a DRL-based AI for Mahjong, also introduced a method to leverage oracle observation (opponents' hands) for stronger performance: the oracle observation, multiplied by a scalar variable annealed from 1 to 0 over the training course, is concatenated with the executor observation as the input to the policy network. However, the method of Li et al. (2020) is also heuristic and has only been tested on one task.
Variational Bayes (VB) is a well-established method and has been exploited in RL. For example, control as probabilistic inference uses VB to connect the RL objective and the variational lower bound of a probabilistic inference problem (Furmston & Barber, 2010; Weber et al., 2015; Levine, 2018). Our idea differs in that it frames a value-regression problem as a maximum-likelihood problem and then applies VB to solve it (see Sec. 4). The use of VB-based network models for DRL has also attracted attention from researchers recently. For example, Ha & Schmidhuber (2018) employed a VAE to reduce the high dimensionality of image observations; Igl et al.
2Generally, oracle observation does not necessarily contain all the information of environmental state.
(2018); Han et al. (2020); Lee et al. (2020) proposed variational RNNs as state-transition models to encode the agent's belief states; Yin et al. (2021) utilized a variational sequential generative model to predict future observations and used the prediction error as an intrinsic reward to encourage exploration; and Okada et al. (2020) demonstrated performance gains with a deep Bayesian planning model in continuous control tasks. Our study differs from these works by focusing on oracle guiding, and VLOG does not involve learning a state-transition model.
3 ORACLE GUIDING POMDP
Here we define the problem based on the partially observable Markov decision process (POMDP) (Sondik, 1978). An oracle guiding POMDP is distinguished from the original POMDP by having two types of observations: executor and oracle. The executor observation x is always available to the agent, and the agent's decision making (execution) relies on it. The oracle observation x̂ is not accessible during execution but can be obtained afterward. x is included in x̂ since the former is always available; thus x̂ contains no less information about the underlying environment state than x.
A formal definition of an oracle guiding POMDP is a tuple 〈S, A, P0, T, X, X̂, O, Ô, γ〉, where S and A are the state and action spaces, respectively. P0 specifies the initial state distribution such that P0(s) is the probability of a state s ∈ S being an initial state. T specifies the state transition probability such that T(s′, r|s, a) is the probability of reaching a new state s′ ∈ S with immediate reward r ∈ R after taking action a ∈ A at state s ∈ S. X denotes the executor observation space and X̂ denotes the oracle observation space. O specifies the executor observation probability such that O(x|s) is the probability of an executor observation x ∈ X at a state s ∈ S. Similarly, Ô specifies the oracle observation probability such that Ô(x̂|s) is the probability of an oracle observation x̂ ∈ X̂ at a state s ∈ S. γ ∈ [0, 1) is the discount factor. Value functions are the expected return from a particular state or state-action pair. For a policy π, its Q-function is defined by $q_\pi(s, a) := \mathbb{E}_\pi\left[\sum_{n=0}^{\infty} \gamma^n r_{t+n} \,\middle|\, s_t = s, a_t = a\right]$, where $\mathbb{E}_\pi$ denotes the expectation when the policy π is followed. The state value function is defined by $v_\pi(s) := \mathbb{E}_{\pi(a|s)}[q_\pi(s, a)]$. The agent aims to maximize $\mathbb{E}_{P_0(s)} v_\pi(s)$ with respect to π, and the value functions play an important role to this end (Sutton & Barto, 2018).
4 VLOG: VARIATIONAL LATENT ORACLE GUIDING
Let us introduce a latent vector $z_t$, a probabilistic variable representing the environmental state $s_t$. From a Bayesian perspective, we consider the prior distribution $p(z_t|x_t)$ as the agent's estimated probability density function (PDF) of $z_t$ based on the executor observation $x_t$. Meanwhile, the posterior PDF $q(z_t|\hat{x}_t)$ is modeled based on the oracle observation $\hat{x}_t$. In RL, the most basic requirement is a good estimate of the return by a value function approximator $v(x_t) := \int v(z_t)\, p(z_t|x_t)\, dz_t$ (we denote it by v, but it can also be a Q-function by simply replacing $x_t$ with $(x_t, a_t)$) based on the available information, i.e., the executor observation $x_t$. The return target, denoted by $v^{\text{tar}}_t$, can be estimated by any value-learning algorithm such as TD(0) (Sutton & Barto, 2018), Peng's Q(λ) (Kozuno et al., 2021), etc. (generally, $v^{\text{tar}}$ can always be given by Bellman equations; one can also use the Monte-Carlo return as $v^{\text{tar}}$ if available). In particular, we employed double Q-learning with the dueling architecture (Wang et al., 2016) to compute $v^{\text{tar}}$ due to its effectiveness and simplicity (Sec. 5 and B.1). We want to maximize the log-likelihood objective of the return estimate based on the executor observation $x_t$ (i.e., for the executor model)
$$\mathcal{L} := \log P\left(v(x_t) = v^{\text{tar}}_t \mid x_t\right) = \log \int_{z_t} P\left(v(z_t) = v^{\text{tar}}_t \mid z_t\right) p(z_t \mid x_t)\, dz_t = \log \int_{z_t} \frac{q(z_t \mid \hat{x}_t)}{q(z_t \mid \hat{x}_t)}\, P\left(v(z_t) = v^{\text{tar}}_t \mid z_t\right) p(z_t \mid x_t)\, dz_t.$$

By Jensen's inequality, we have

$$\mathcal{L} \geq \int_{z_t} q(z_t \mid \hat{x}_t) \log \frac{P\left(v(z_t) = v^{\text{tar}}_t \mid z_t\right) p(z_t \mid x_t)}{q(z_t \mid \hat{x}_t)}\, dz_t = \int_{z_t} \left[ q(z_t \mid \hat{x}_t) \log P\left(v(z_t) = v^{\text{tar}}_t \mid z_t\right) - q(z_t \mid \hat{x}_t) \log \frac{q(z_t \mid \hat{x}_t)}{p(z_t \mid x_t)} \right] dz_t = \underbrace{\mathbb{E}_{q(z_t \mid \hat{x}_t)}\left[\log P\left(v(z_t) = v^{\text{tar}}_t \mid z_t\right)\right]}_{\text{oracle prediction error}} - \underbrace{D_{\mathrm{KL}}\left(q(z_t \mid \hat{x}_t)\,\|\,p(z_t \mid x_t)\right)}_{\text{regularization term}} := \mathcal{L}_{\text{VLOG}}. \quad (1)$$
Thus we can maximize our initial objective $\mathcal{L}$ by maximizing $\mathcal{L}_{\text{VLOG}}$, which is also known as the variational lower bound (Kingma & Welling, 2014), here in our oracle-guiding scheme. Since $p(z_t|x_t)$ and $q(z_t|\hat{x}_t)$ represent the PDFs of the latent vector obtained from the executor and oracle observations, respectively, the meanings of the two terms in $\mathcal{L}_{\text{VLOG}}$ are now clear: the first term, the oracle prediction error, improves value estimation from the posterior latent state distribution ($z_t$ computed with oracle observation); and the second term, the regularization term, shapes the prior representation of $z_t$ to be closer to the posterior one, as latent oracle guiding. We would like to highlight that the VLOG objective is a lower bound of the objective of the prior executor model $v(x_t)$ (the estimation of return using executor observation $x_t$) that makes use of the oracle observation $\hat{x}_t$. This lower bound guarantees that the usage of oracle observation facilitates the learning of the executor model, which is our original motivation. Remark 1. One may use any form of the approximate posterior q, depending on which different instances of VLOG are possible. Furthermore, one may directly use $v(x_t)$ instead of $p(z_t|x_t)$. These design choices allow users to incorporate prior knowledge about the oracle observation. For example, if one knows that the range of a state value at $x_t$ is a closed interval [l, u], the approximate posterior $q(v_t|\hat{x}_t)$ can be restricted to a family of probability distributions supported on [l, u].
4.1 IMPLEMENTATION WITH NEURAL NETWORKS
Inspired by the implementation of the variational auto-encoder (VAE, Kingma & Welling (2014)), we propose the neural network architecture of VLOG (Fig. 1). The executor observation $x_t$ and oracle observation $\hat{x}_t$ are processed by two distinct encoder networks to compute the prior and posterior distributions of the latent vector, respectively. During training, both $x_t$ and $\hat{x}_t$ are available, and all network parameters are updated by maximizing the VLOG objective in an end-to-end manner (Fig. 1A). During execution, the agent computes the prior distribution $p(z|x_t)$ for decision making (Fig. 1B) without using oracle observation. $z_t$ is sampled from a parameterized normal distribution:
$$p(z_t \mid x_t) = \mathcal{N}\left(\mu^p_t, \exp(\log \sigma^p_t)\right), \quad (\mu^p_t, \log \sigma^p_t) = \mathrm{prior\_encoder}(x_t),$$
$$q(z_t \mid \hat{x}_t) = \mathcal{N}\left(\mu^q_t, \exp(\log \sigma^q_t)\right), \quad (\mu^q_t, \log \sigma^q_t) = \mathrm{posterior\_encoder}(\hat{x}_t).$$
For computing $P(v(z_t) = v^{\text{tar}}_t \mid z_t)$ in Eq. 1, we simply assume it follows a normal distribution and estimate it in practice via the mean square error between $v(z_t)$ and $v^{\text{tar}}_t$. The reparameterization trick is used to enable end-to-end training, as in the VAE (Kingma & Welling, 2014). The output of the decoder (value function) is then obtained as $v(z_t) = \mathrm{decoder}(z_t)$. Note that $z_t$ is obtained using the posterior encoder during training and the prior encoder during execution (Fig. 1A, B).
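A minimal PyTorch sketch of this computation (our paraphrase of the equations above; the helper name and encoder interfaces are assumptions):

import torch

def encode_latent(encoder, observation):
    # Encoder outputs (mu, log_sigma); return a diagonal Gaussian over z.
    mu, log_sigma = encoder(observation).chunk(2, dim=-1)
    return torch.distributions.Normal(mu, log_sigma.exp())

# Training: sample z from the posterior with the reparameterization trick.
#   z = encode_latent(posterior_encoder, x_oracle).rsample()
# Execution: use the prior computed from the executor observation instead.
#   z = encode_latent(prior_encoder, x_executor).rsample()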
4.2 TASK-AGNOSTIC LATENT BOTTLENECK CONTROL
To learn a better representation, we borrow the idea of β-VAE (Higgins et al., 2016) and multiply the regularization term by a coefficient β. We thus have the loss function (negative lower bound)
$$J^{\beta}_{\text{VLOG}} = -\mathbb{E}_{q(z_t \mid \hat{x}_t)}\left[\log P\left(v(z_t) = v^{\text{tar}}_t \mid z_t\right)\right] + \beta\, D_{\mathrm{KL}}\left(q(z_t \mid \hat{x}_t)\,\|\,p(z_t \mid x_t)\right). \quad (2)$$
The hyper-parameter β controls the capacity of the latent information bottleneck (Tishby & Zaslavsky, 2015; Alemi et al., 2017). We found that the choice of β is important for the performance of VLOG in RL (see Appendix B.3). However, extra hyper-parameters are not desirable. Inspired by the method used in Burgess et al. (2017) for controlling the scale of the KL divergence in β-VAE, we propose a task-agnostic method to automatically adjust β by setting a target KL divergence $D^{\text{tar}}_{\mathrm{KL}}$. In particular, we minimize the auxiliary loss function (with β as the optimized parameter)
$$J_{\beta} = \left(\log_{10} D^{\text{tar}}_{\mathrm{KL}} - \log_{10} D_{\mathrm{KL}}\left(q(z_t \mid \hat{x}_t)\,\|\,p(z_t \mid x_t)\right)\right) \log \beta. \quad (3)$$
The intuition is to strengthen the regularization by increasing β when the divergence between prior and posterior is too large, and vice versa. This method is similar to that used in soft actor-critic for automatically adjusting the entropy coefficient (Haarnoja et al., 2019), but applied here to the KL divergence coefficient. Importantly, we found a well-performing value $D^{\text{tar}}_{\mathrm{KL}} = 50$ that is agnostic to other design choices: it worked well across a range of different tasks and networks (Sec. 5). Therefore, we do not need to tune β. We provide more discussion of this method in Appendix D.
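Putting Eq. 2 together, a minimal sketch of the full VLOG loss (ours, reusing encode_latent from the sketch in Sec. 4.1; the Gaussian likelihood is estimated by mean square error as described above):

import torch
import torch.nn.functional as F

def vlog_loss(prior_encoder, posterior_encoder, decoder, x, x_oracle, v_target, beta):
    prior = encode_latent(prior_encoder, x)
    posterior = encode_latent(posterior_encoder, x_oracle)
    z = posterior.rsample()  # reparameterization trick
    pred_error = F.mse_loss(decoder(z), v_target)  # oracle prediction error term
    kl = torch.distributions.kl_divergence(posterior, prior).sum(-1).mean()
    return pred_error + beta * kl  # Eq. 2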
5 EXPERIMENTS
How does VLOG perform in practice? We investigated the empirical performance of VLOG on three types of tasks, from simple to difficult, using online or offline RL. In the following experiments, we used double DQN with the dueling network architecture (van Hasselt et al., 2016; Wang et al., 2016) as the base RL algorithm; the model and RL loss functions are defined in Appendix B.1. As DRL is susceptible to the choice of hyper-parameters, introducing any new hyper-parameters might obscure the effect of oracle guiding. Double DQN and the dueling architecture are preferable as the base algorithm since they require no additional hyper-parameters, in contrast to other DQN variants (Hessel et al., 2018) such as prioritized experience replay (Schaul et al., 2016), noisy networks (Fortunato et al., 2018), categorical DQN (Bellemare et al., 2017), and distributed RL (Kapturowski et al., 2018). Importantly, we used the same hyper-parameter setting for all methods and environments as much as possible (see Appendix B.2).
5.1 MAZE
We first demonstrate how VLOG helps to shape the latent representation by leveraging oracle observation during learning. The testbed is a maze navigation task3 (Fig. 2A) on a 10×10 grid. The executor observation is the (x, y) position, where x, y are continuous values randomly sampled within the current grid of the maze (thus the observations in two adjacent but wall-separated grids may be very close). At each step, the agent selects an action (up, down, right, or left) and moves to another grid unless blocked by a wall. We provided the VLOG agent with the oracle observation (xc, yc, dg) during training, where xc, yc are the coordinates of the center of the current grid and dg is the (shortest) path distance to the goal from the current grid. Intuitively, although the raw observation is (x, y), dg matters more in a maze navigation task. We empirically investigated how much such oracle observation, used during learning with VLOG, helps to shape the latent representation of z with respect to dg rather than position. The encoder and decoder were both 2-layer multi-layer perceptrons (MLPs) of width 256 with ReLU activation. The size of the latent vector zt for VLOG was 128 since we computed both µ and σ (Appendix C).
Experiments show that the baseline agent struggled to reach the goal (Fig. 2B), while VLOG agents stably solved the task after learning. To check how VLOG affected the learned latent representation, we visualize the latent representations of both the VLOG and baseline models with principal component analysis (PCA, Pearson F.R.S. (1901)). In Fig. 2C, we map the path distance to goal dg to color and plot the scores of the first 2 PCs of z (computed using executor observation)
3https://github.com/MattChanTK/gym-maze
for VLOG and the corresponding latent state for the baseline using successful trials (“latent layer” in Fig. 1C). The latent state of VLOG showed a smoother and more interpretable representation of distance-to-goal compared to that of the baseline. We then plot the latent representations for different positions in the maze in Fig. 2D. The latent state of VLOG more clearly represented dg, consistent with the result in Fig. 2C. In particular, we examined a rectangular region (denoted by rectangles in Fig. 2D) in which the left 2 grids and right 2 grids are separated by a wall. We found the corresponding areas in the latent PC space and circled them in Fig. 2C. While these 4 grids are close in (x, y) (executor observation), their distances-to-goal (oracle observation) differ substantially. By leveraging oracle guiding with VLOG, the agent clearly differentiates the left 2 grids from the right 2 grids in latent space, as shown in Fig. 2C, D, left (note that the latent state z of VLOG here was computed using executor observation only). By contrast, the latent representations of these grids overlapped for the baseline model, which did not utilize the oracle observation (Fig. 2C, D, right). In sum, we demonstrated with a toy example that VLOG effectively couples the latent space with oracle state information useful for the task. The following sections turn to experiments on more complicated tasks and discuss how VLOG can improve practical performance.
5.2 NOISY MINATAR
To evaluate how VLOG scales to higher-dimensional state spaces, we tested it on a set of MinAtar video games. MinAtar (Young & Tian, 2019) is a test platform for AI agents that implements 5 miniaturized Atari 2600 games with discrete actions (Seaquest, Breakout, Space Invaders, Freeway, and Asterix). MinAtar is inspired by the Arcade Learning Environment (Bellemare et al., 2013) but simplifies the environments for efficiency. The observation is 10×10 pixels with multiple channels indicating different objects. In the real world, observations usually contain noise, so it is natural to consider a noisy observation as the partially observable executor observation and the original, noise-free observation as the oracle one. Suppose that at each frame, each pixel may “break” randomly with an independent probability of 1/8 (Fig. 3A): the original observation at a broken pixel is erased and replaced by a different value in all channels. We consider such noisy MinAtar environments with the noisy pixels as the executor observation and the original pixels as the oracle observation.
The network structure was the same as that for Maze, but the encoder was replaced by a CNN (Appendix C). We ran experiments on all 5 MinAtar environments with VLOG as well as
the baseline, oracle, and alternative oracle-guiding methods (see Appendix A for details). The baseline model always uses the executor observation as network input (Fig. 1C). Oracle is the same as the baseline except that it always receives the oracle observation (i.e., cheating; we ran the oracle experiments for reference). VLOG-no oracle is an ablation of VLOG in which we use the executor observation as the input to the posterior encoder (the oracle observation is not used). Suphx-style oracle guiding is the oracle-guiding method used in the Mahjong AI Suphx (Li et al., 2020), in which the executor observation and the dropped-out oracle observation (with dropout probability pdropout) are concatenated as the input to the network. As training proceeds, pdropout is gradually increased from 0 to 1, so the trained network does not need the oracle observation as input (Appendix A). OPD-style oracle guiding is the method used in oracle policy distillation (OPD) (Fang et al., 2021): it first trains a teacher model using the oracle observation as input, and then trains the executor model with an auxiliary loss, the error between the executor's and the teacher's estimates of the value function (Appendix A).
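For clarity, a minimal sketch of the Suphx-style input construction as we read the description (whether the mask is applied per element or as a single annealed scalar is not fully specified, so this is an assumption):

import torch

def suphx_style_input(x_executor, x_oracle_only, p_dropout):
    # p_dropout is annealed from 0 to 1 during training, so the trained
    # network eventually relies on the executor observation alone.
    mask = (torch.rand_like(x_oracle_only) > p_dropout).float()
    return torch.cat([x_executor, mask * x_oracle_only], dim=-1)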
The results show that oracle usually performed the best, as expected. We normalized the performance of the non-oracle models using the oracle model as a reference for clearer comparison (Fig. 3B). Among all the oracle-guiding methods (VLOG, OPD-style, and Suphx-style), VLOG consistently performed the best. It is notable that VLOG and VLOG-no oracle performed surprisingly well in Seaquest. This can be explained by the fact that Seaquest is a task with a local optimum (see Appendix E), while the stochasticity in the hidden states of VLOG helped exploration in latent space to escape from the local optimum (a similar idea is Fortunato et al. (2018), but their noise was added to the network weights). Except in Seaquest, VLOG-no oracle did not show a significant performance difference from the baseline, showing that the performance gain of VLOG in this task set came mainly from leveraging oracle observation to shape the latent distribution, and that the use of a variational Bayesian model at least did not harm performance when there was no helpful oracle information.
5.3 OFFLINE LEARNING ON MAHJONG
Mahjong is a popular tile-based game with hundreds of millions of players worldwide (here we consider the Japanese variant). The game is like many other card games (but uses tiles instead of cards): multiple (usually four) players draw and discard tiles (136 tiles in total) alternately to satisfy winning conditions. It is a highly challenging game characterized by (1) imperfect information in the executor observation (a player cannot see the opponents' private tiles or the remaining tiles to be drawn), (2) stochastic state transitions, as in many card games, and (3) extremely high game complexity (i.e., the number of distinct legal game states). The complexity of Mahjong is much larger than $10^{166}$ (Appendix F.1); for reference, the complexity of Go is $\sim 10^{172}$ (Silver et al., 2016) and that of no-limit Poker is $\sim 10^{162}$ (Johanson, 2013). In Mahjong, it is hard to make optimal decisions based on the executor observation because outcomes heavily depend on invisible information, and the complexity of the invisible state space is as high as $10^{48}$ on average (Li et al., 2020). In response to this challenge, Li et al. (2020) introduced Suphx-style oracle guiding and demonstrated a performance gain. We therefore consider Mahjong a promising test platform for oracle-guiding methods. Since the number of possible states in Mahjong is extremely large, it is costly to explore with random actions in an online RL manner, and no previous work has trained a strong Mahjong AI with purely online RL. Also, we would like to examine the effectiveness of VLOG in offline RL settings. For these reasons, we turned to offline RL (Levine et al., 2020) for the Mahjong task using expert demonstrations.
We processed about 23M steps of human experts' plays from the online Mahjong game platform Tenhou (https://tenhou.net/mjlog.html) into a dataset for offline RL (the data were augmented using the symmetry in Mahjong, see Appendix F). Also, we created a simulator of Mahjong as the testing environment. Though there are sophisticated ways to encode the state and action spaces of Mahjong (Li et al., 2020), we make simplifications with a reasonable amount of approximation, since our goal is not to create a strong Mahjong AI but to use Mahjong as a platform to study oracle-guiding problems. In our case, the action space is composed of 47 discrete actions covering all decisions in Mahjong. An executor observation is a matrix encoding public information and the current player's private hand; an oracle observation concatenates the executor observation with the information of the opponents' private hands (see Appendix F). We used a 1-D CNN as the encoder, as is common in Mahjong AIs (Li et al., 2020), and the size of zt and the decoder network width were increased to 512 and 1024, respectively (Appendix C).
Note that although Mahjong is a 4-player game, using offline RL data to train an agent does not involve multi-agent RL (Zhang et al., 2021) because the offline dataset is fixed: the opponents have fixed policies and can thus be considered parts of the environment. Our experiments focused on single-agent RL to avoid the complications of multi-agent RL. We investigated two offline RL settings. The first is conservative Q-learning (CQL) (Kumar et al., 2020); our CQL setting differed from the online RL setting of the previous sections by adding an auxiliary CQL loss (Kumar et al., 2020) to the Q-learning loss function (Appendix B.1.2). The other is behavior cloning (BC). Although VLOG was designed for value-based RL, we could straightforwardly incorporate VLOG with BC by letting the network predict actions instead of the Q-function. Learning was conducted by minimizing the cross entropy between the output and the target action (demonstration), as in a classification problem. Note that we did not test OPD-style oracle guiding in the BC setting because it would equal the baseline, since we can directly use demonstration actions as the oracle policy for distillation.
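A minimal sketch of the prediction term in this BC variant (ours): the decoder outputs action logits, and the oracle prediction error of Eq. 2 becomes a cross-entropy against the demonstrated actions.

import torch.nn.functional as F

def bc_vlog_prediction_loss(decoder, z, demo_actions):
    logits = decoder(z)  # shape (batch, 47) for Mahjong
    return F.cross_entropy(logits, demo_actions)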
Because Mahjong is a zero-sum, four-player game, we tested the performance of the trained models in two scenarios: playing against the (trained) baseline model (Table 1) and playing against each other (Table 2). In the first scenario, four agents played the same matches at the game table, two of them being the agents under test and the other two the baseline models. Although each agent played for itself and there was no communication between players, for statistics we simply summed the payoffs of the two tested agents and considered them to have won a match if one of them ranked top (so the match win rate is 50% if they are exactly as strong as the baseline) (Table 1).
For CQL, the results (Table 1 left and 2 upper) show that VLOG substantially outperformed the baseline and alternative methods (because Mahjong is a highly random game, a 55.7% match win rate indicates a large skill gap). Interestingly, VLOG was even comparable to oracle. This can be explained by the fact that VLOG also benefited from its Bayesian property, consistent with the significant performance gain of VLOG-no oracle over the baseline model (Table 1 left). Still, the oracle model learned to reduce deal-ins (i.e., a player discards a tile and another player wins the game by picking up this tile to complete a winning hand) since it could explicitly see the opponents' private tiles, showing a much lower deal-in rate than the other non-cheating models (Table 2 upper).
In the BC setting, the agents did not learn a value function but tried to predict human experts' actions. The training procedure therefore did not involve reasoning about the relationship between the playing outcome and the oracle observation; it just imitated human behavior. This can be seen from the result that oracle did not substantially outperform the baseline in BC (Table 1 right and 2 lower). However, VLOG and VLOG-no oracle still showed performance gains, thanks to the stochastic modeling.
6 SUMMARY
We have proposed VLOG, a variational Bayesian learning framework for leveraging oracle observation to facilitate DRL, especially in partially observable environments. VLOG is applicable to any RL problem in which there is oracle observation that may help the executor make decisions.
We first introduced a latent vector z to represent the environmental state, with the prior and posterior distributions of z modeled using the executor and oracle observations, respectively. Then, we derived a variational lower bound (Eq. 2), by maximizing which we can optimize the executor model using the oracle observation. We developed the corresponding methodology for DRL, which can be incorporated into most RL algorithms that estimate a value function.
If the oracle observation contains more information for retrieving the true environmental state (or is the true environmental state), VLOG's oracle guiding in latent space helps shape a latent representation in the neural network closer to the true one. We demonstrated this advantage of VLOG using the maze task. Then, we scaled VLOG up to image-based video games and compared it with alternative oracle-guiding methods. Though all oracle-guiding methods showed performance gains over the baseline model, VLOG consistently performed the best. Finally, we moved to the offline RL domain using the challenging tile-based game Mahjong, in which the executor plays with hidden information and random state transitions, and observed that VLOG achieved the best overall performance.
We also conducted an ablation study of VLOG (VLOG-no oracle) in which the posterior model received the executor observation instead of the oracle one. VLOG-no oracle demonstrated performance gains in tasks that benefit from stochasticity; otherwise, it performed similarly to the deterministic baseline. This clarified that the source of VLOG's promising performance is two-fold: oracle guiding and stochastic modeling. Finally, we publish the Mahjong dataset for offline RL and the corresponding RL environment to facilitate future research on oracle guiding.
ACKNOWLEDGEMENT
This work was supported by Microsoft Research Asia. Kenji Doya was supported by Japan Society for the Promotion of Science KAKENHI Grant Numbers JP16K21738, JP16H06561 and JP16H06563, as well as by Okinawa Institute of Science and Technology.
REPRODUCIBILITY STATEMENT
The source code of VLOG can be found in Supplementary Material.
ETHICS STATEMENT
We declare no conflicts of interest. We tried to use colors friendly to people with color-vision deficiencies (Fig. 2C, D) and distinguishable markers for the performance curves of different models (Fig. 2B, Fig. 3, and Fig. 6). Our Mahjong dataset was generated using downloadable, public game replay data from Tenhou.net with post-processing; the dataset contains no private information about players. Since VLOG is a general framework for leveraging oracle information, we cannot foresee any direct application of VLOG to malicious purposes. However, any new RL algorithm might confer increased autonomy on an agent and eventually lead to a completely autonomous agent, which could be used for malicious purposes, e.g., fully autonomous soldiers.
B RL ALGORITHMS AND HYPER-PARAMETERS
B.1 RL ALGORITHMS
B.1.1 DUELING DOUBLE DQN FOR MAZE AND MINATAR TASKS
As discussed in Sec. 5, we used double DQN with the dueling network architecture (van Hasselt et al., 2016; Wang et al., 2016) as the base RL algorithm because it works relatively well (Hessel et al., 2018) without introducing additional hyper-parameters.
The dueling architecture of DQN (Wang et al., 2016) is defined as follows (see Appendix C for the hidden layer size):
class DuelingQNetwork(nn.Module):
    def __init__(self, input_size, action_num, hidden_layers):
        super(DuelingQNetwork, self).__init__()
        self.input_size = input_size
        self.action_num = action_num
        self.hidden_layers = hidden_layers
        self.network_modules = nn.ModuleList()
        last_layer_size = input_size
        for layer_size in hidden_layers:
            self.network_modules.append(nn.Linear(last_layer_size, layer_size))
            self.network_modules.append(nn.ReLU())
            last_layer_size = layer_size
        self.value_layer = nn.Linear(last_layer_size, 1)
        self.advantage_layer = nn.Linear(last_layer_size, action_num)
        self.main_network = nn.Sequential(*self.network_modules)

    def forward(self, x):
        h = self.main_network(x)
        v = self.value_layer(h).repeat_interleave(self.action_num, dim=-1)
        q0 = self.advantage_layer(h)
        a = q0 - torch.mean(q0, dim=-1, keepdim=True).repeat_interleave(
            self.action_num, dim=-1)
        q = v + a
        return q
Double deep Q-learning (van Hasselt et al., 2016) was used to compute $Q^{\text{target}}$ in Fig. 1A (any other algorithm could be used to compute $Q^{\text{target}}$ without changing the other parts). In particular, as in Wang et al. (2016), we have
$Q^{\text{target}}_t = r_t + \gamma\, Q\left(z_{t+1}, \arg\max_{a'} Q(z_{t+1}, a'; \theta);\ \theta^-\right),$
where $r_t$ is the reward at step t, $\gamma$ is the discount factor (Table 3), and $\theta$ denotes the parameters of the Q network (MLP decoder) that computes the Q-function (Appendix C). Note that z is given by the posterior encoder with the oracle observation x̂ as input, since the oracle prediction error term $\mathbb{E}_{q(z_t|\hat{x}_t)}[\log P(v(z_t) = v^{\text{tar}}_t \mid z_t)]$ in Eq. 1 is an expectation over the posterior distribution q(z|x̂). Following standard deep RL practice (Mnih et al., 2015; Wang et al., 2016; van Hasselt et al., 2016), we used a target Q network with the same structure as the original Q network, whose parameters are denoted $\theta^-$ (Table 3). Every 1,000 steps, the target Q network copies the parameters of the original Q network (Table 3). The first term of the VLOG loss function (Eq. 2) is then simply given by the mean square error between $Q^{\text{target}}$ and the output of the Q network (MLP decoder) for the Maze and MinAtar tasks.
B.1.2 DUELING DOUBLE DQN WITH CONSERVATIVE Q LEARNING FOR MAHJONG
In Mahjong, since we transfer to the offline RL domain (Sec. 5.3), directly using an off-policy RL algorithm usually results in very unsatisfying performance (Levine et al., 2020).
Therefore, we complement the loss function of VLOG (Eq. 2) with an auxiliary conservative Q-learning (CQL) loss (Kumar et al., 2020),
$J_{\text{CQL}} = \alpha\, \mathbb{E}_{\hat{x}, a \sim \mathcal{D}}\left[\log \sum_{a' \in \mathcal{A}} \exp\left(Q(z(\hat{x}), a'; \theta)\right) - Q(z(\hat{x}), a; \theta)\right],$
where $\mathcal{D}$ is the offline dataset we used for Mahjong and $\alpha = 1$. The combined loss function used for Mahjong (CQL) is $J^{\beta}_{\text{VLOG}} + J_{\text{CQL}}$.
B.2 HYPER-PARAMETER SELECTION
We summarize the hyper-parameters in Table 3.
B.3 SENSITIVITY ANALYSIS FOR β
As mentioned in Sec. 4.2, the coefficient β is important for the learning of VLOG agents. If we fix the value of β throughout training, a β that is too large or too small results in worse performance. The corresponding results are shown in Fig. 6.
C NETWORK STRUCTURES
For simplicity, we use the same hidden layer size for all fully connected layers: 256 for Maze and MinAtar, and 1024 for Mahjong.
C.1 ENCODER
Since we target various tasks, we used a different encoder network for each type of environment. The prior and posterior encoders have the same structure, differing only in the sizes of their input features/channels.
For the Maze task, the encoder was a 2-layer MLP with ReLU activation. The output size is equal to the hidden layer size.
For the MinAtar tasks, we used a 2-D CNN encoder defined as follows:
import torch.nn as nn

cnn_module_list = nn.ModuleList()
cnn_module_list.append(nn.Conv2d(n_channels, 16, 3, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(16, 32, 3, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(32, 128, 4, 2, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(128, 256, 2, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Flatten())
cnn_minatar = nn.Sequential(*cnn_module_list)
where n_channels is the number of channels of the executor or oracle observation. The output size is equal to the hidden layer size.
For Mahjong, because the second dimension of the observation (tile ID) has a local contextual relationship (Li et al., 2020), we used a 1-D CNN (convolving along the tile-ID dimension) as the encoder, defined as follows:
cnn_module_list = nn.ModuleList()
cnn_module_list.append(nn.Conv1d(n_channels, 64, 3, 1, 1))
cnn_module_list.append(nn.Conv1d(64, 64, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv1d(64, 64, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv1d(64, 32, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Flatten())
cnn_mahjong = nn.Sequential(*cnn_module_list)
The output size is 1088 (32 channels × 34 tiles), close to the hidden layer size.
C.2 LATENT LAYER
For VLOG and VLOG (no-oracle), the size of the z layer is half of the hidden layer size because we need to estimate both the mean and the variance of z. For all other models (baseline, oracle, OPD-style, Suphx-style), the latent layer is one fully connected layer of size hidden_layer_size with ReLU activation.
C.3 DECODER
The decoders for all models were 2-layer MLPs of size hidden_layer_size. Except for BC on Mahjong, the input of the decoder was the output of the latent layer concatenated with the action, and we used the dueling Q-network structure (Wang et al., 2016) to output a scalar Q value. ReLU activation was used except at the output.
For BC on Mahjong, the input of the decoder was the output of the latent layer. The decoder output the action logits, and the actions could be obtained using softmax.
D TASK-AGNOSTIC LATENT BOTTLENECK CONTROL
As discussed in Sec. 4.2, the coefficient β of the regularization term in the VLOG loss function (Eq. 2) is adaptively regulated by Eq. 3, given $D^{\text{tar}}_{\text{KL}}$, which is itself another hyper-parameter. While our experiments demonstrated the effectiveness of this approach, the following discusses the reasoning behind this design choice.
In principle, replacing one hyper-parameter with another does not always make training easier. In practice, however (especially in deep RL), performance is highly sensitive to certain hyper-parameters; e.g., the entropy coefficient α in the original soft actor-critic algorithm (Haarnoja et al., 2018) needs to be tuned for each robotic task because the reward magnitude differs among tasks. It is therefore beneficial to replace a sensitive hyper-parameter with one that does not need fine-tuning. For example, in the follow-up paper on soft actor-critic, the authors used an adaptive entropy coefficient α by introducing another hyper-parameter, the entropy target (Haarnoja et al., 2019). They empirically found that setting the entropy target equal to the negative of the agent's degrees of freedom works well, thus avoiding tuning α.
Our idea of replacing β with $D^{\text{tar}}_{\text{KL}}$ follows similar reasoning. One obvious problem is that the magnitude of the “oracle prediction error” term (Eq. 2) depends on the reward magnitude of the task, so β would have to be adjusted to match the magnitude of the task reward. However, $D^{\text{tar}}_{\text{KL}}$ depends only on the magnitudes of the prior and posterior z, which do not differ much among tasks (usually on the order of 1). In practice, we found that $D^{\text{tar}}_{\text{KL}} = 50$ works well for all the tasks, including Maze, MinAtar, and Mahjong.
Another way of regulating β is to employ a linear or exponential scheduler. For example, Burgess et al. (2017) used a linear scheduler for the target KL divergence and obtained good results. However, using a scheduler introduces more hyper-parameters (at least two: the initial β and the final β), which runs against our intention of reducing the impact of hyper-parameters.
E SEAQUEST LOCAL OPTIMUM
In Seaquest, the agent drives a submarine into the sea to shoot at enemies and rescue divers to earn scores. However, the submarine has limited oxygen: to survive, it must surface to replenish oxygen before running out, during which it temporarily cannot earn scores. A typical local optimum is to use the last remaining oxygen for diving instead of surfacing.
F MAHJONG
F.1 ESTIMATION OF GAME COMPLEXITY OF MAHJONG
We consider 4-player Japanese Mahjong4. Although there are minor variants of the rules, the following estimation applies to general cases.
For easier computation, we make two major simplifications (i.e., we estimate a lower bound of the game complexity): (1) melding from discard5 is not considered, and (2) information other than tiles, such as the results of preceding games and the players' points, is not considered.
There are 34 types of tiles, each with 4 duplicates (136 tiles in total). We further restrict our estimation to the last turn (i.e., when the last tile is drawn). Among the 136 tiles, 53 are in someone's hand (the 4 players hold 14, 13, 13, and 13 tiles, respectively). The permutation of tiles within one's hand does not make a difference, while the permutation matters for tiles not in a hand. The number of distinguishable configurations of the 136 tiles can thus be computed as $\frac{136!}{(13!)^3 \times 14! \times (4!)^{34}} \sim 10^{145}$.
Meanwhile, for each discarded tile, it is important to know whether it was discarded immediately after being drawn or not. For 70 discarded tiles, the number of possibilities is simply $2^{70} \sim 10^{21}$. Therefore, the lower bound of the game complexity of Mahjong is estimated as $10^{145} \times 10^{21} \sim 10^{166}$.
If the other simplified-away information is considered, the state space could be much larger. For example, consider the current points of each player. The most common rule is that each player starts with 25,000 points, with 100 points as the minimal unit (1,000 units in total), and the game terminates if someone's points become negative. The number of possibilities can thus be converted to the answer of “how many ways are there to distribute 1,000 candies to 4 kids”, which is $\frac{(1000+1)^{4-1}}{(4-1)!} \sim 10^{8}$.
F.2 DETAILS OF OBSERVATION SPACE AND ACTION SPACE ENCODING
In our Mahjong environment6, the action space is composed of 47 discrete actions (Table 5). Because not all actions are available at every step, we also provide the set of valid actions among the 47 at each step according to the rules, which can be used during playing and learning.
The oracle observation has the shape of 111 × 34 (the executor observation is a part of the oracle observation, with shape 93 × 34) (Fig. 5). The first dimension corresponds to 111 features (channels). The second dimension of the observation (with size 34) corresponds to the 34 Mahjong tiles (the order is Character 1–9, Dot 1–9, Bamboo 1–9, East, South, West, North, White, Green, Red). We used 1-D CNNs with convolution along the second dimension for the encoders. Suppose the current player is player 0, and the other players are numbered 1, 2, 3 counter-clockwise. The value of any element in an observation is 1 or 0, as explained in Table 4.
4 https://en.wikipedia.org/wiki/Japanese_Mahjong
5 A meld is a specific pattern of three or four tiles. A player can pick up a discarded tile from others to form a meld by displaying the meld publicly, if certain conditions are satisfied.
6 The Mahjong environment used in this paper is available at https://github.com/pymahjong/pymahjong for reproducibility. However, we recommend using the newer version https://github.com/Agony5757/mahjong, which is better supported by the authors and much faster.
F.3 DATA AUGMENTATION
In (Japanese) Mahjong, there are 3 suit sets of tiles (characters, dots, bamboos), 4 wind tiles, and 3 dragon tiles. The 3 suit sets are symmetric and thus exchangeable with each other7. So are the 4 wind tiles and the 3 dragon tiles. Exploiting this symmetry, we augmented the offline RL dataset by randomly exchanging the 3 suit sets, the 4 wind tiles, and the 3 dragon tiles, respectively.
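A possible implementation of this augmentation (a sketch assuming the tile ordering of Appendix F.2, where indices 0–26 are the three suits in blocks of 9, 27–30 the winds, and 31–33 the dragons):

import numpy as np

def random_tile_permutation(rng):
    # Shuffle the 3 suit blocks, the 4 wind tiles and the 3 dragon tiles independently
    perm = np.arange(34)
    perm[0:27] = np.concatenate([np.arange(9 * s, 9 * s + 9) for s in rng.permutation(3)])
    perm[27:31] = 27 + rng.permutation(4)
    perm[31:34] = 31 + rng.permutation(3)
    return perm

def augment_observation(obs, perm):
    # obs: array of shape (channels, 34); permute the tile-ID axis
    return obs[:, perm]

rng = np.random.default_rng(0)
obs_aug = augment_observation(np.zeros((111, 34)), random_tile_permutation(rng))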
7 There is one exception: the winning-hand pattern "all green". Since it is an extremely rare case, we simply ignore it. | 1. What is the main contribution of the paper, and how does it relate to the problem of agents learning from state features only available during training?
2. How does the proposed approach leverage Bayesian theory, and what is the advantage of this approach?
3. Can you explain the evaluation method used in the paper, particularly the use of an oracle, and why it is suspect?
4. Why is the top-left value of Table 2 not bolded, and what does it represent?
5. How do the experiments combine too many things, and what would be a better way to evaluate the benefits of VLOG?
6. Can you clarify the relationship between X (set of executors) and X^ (set of oracles)?
7. Is there any difference between v_t in p(v_t|x_t) and V(x_t)?
8. What are "suphx-style oracle guiding" and "OPD-style oracle guiding"?
9. What is the "(trained) baseline model" used in Table 1?
10. Are there any minor comments or suggestions for improving the paper's exposition? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents an approach to dealing with problems where agents could benefit from learning from state features that are only available during training, but not during evaluation. Their approach leverages Bayesian theory to propose a variational learning method and is evaluated on a variety of tasks, including a Mahjong simulation (which the authors are providing as part of their contribution).
Review
I really like the motivation and idea behind this paper, and I think it is something that future researchers could make use of. In particular, I like the simplicity of the approach.
My main concern with the paper is that the evaluation on Mahjong:
It is somewhat confusing to follow (for readers not already familiar with Mahjong), and it is not clear what the takeaway is (other than VLOG seems to work).
The results are somewhat suspect, since the oracle does the worst in Table 2, which is not what one would expect.
The method for scoring described at the bottom of page 8 ("we simply added up the payoff of the two being tested agents") seems quite strange and not really a fair evaluation. Is this really how Mahjong is played/evaluated?
Why isn't the top-left value of table 2 bolded as well? The VLOG line in Table 2 has overlapping CIs, so should not be bolded.
Finally, the experiments combine too many things: offline RL, multi-agent RL, etc. This makes it harder to reliably ascertain what the benefits of VLOG are. While interesting, I'd recommend clarifying the exposition of the experiments, or finding another (better known) benchmark, and leaving the Mahjong experiments as an extra set of appendix experiments.
Questions for the authors:
In footnote 2 on page 3, is there any relationship between $X$ (set of executors) and $\hat{X}$ (set of oracles)? I assume the latter is a subset of the full $X$?
In Remark 1, is the $v_t$ in $p(v_t|x_t)$ equal to $V(x_t)$?
In the last paragraph of section 5.2 it says "VLOG-no oracle performed surprisingly well", but this is not the case in the figures shown.
What is "suphx-style oracle guiding"?
What is "OPD-style oracle guiding"
What is the "(trained) baseline model" used in Table 1?
Minor comments:
The abstract starts with "How to make intelligent decisions is a central problem in machine learning and cognitive science". Minor nit, but I'd say cognitive science is more concerned with understanding how decisions are made.
In the abstract should say "RL using variational methods."
In the abstract should say "decision-making tasks ranging from video games..."
At the end of the second paragraph of the introduction, it's not necessary to include "etc" since you started the sentence with "including".
In section 2, remove "different" after "Another" in the second sentence.
In section 2 should say "make the student policy behave more similar to the teacher policy. Both methods"
Title of section 3 should be "PRELIMINARIES"
In the first sentence of section 4 should say "variable representing the environmental state"
In the first sentence of section 5.1, should say "by leveraging oracle observations in learning."
In the end of the first paragraph of 5.1, should say "another grid if not blocked by a wall."
In the first sentence of 5.2 should say "we tested it on a set of MinAtar video games"
When citing the ALE, you should cite Bellemare et al., 2013 instead of Machado et al.
In the third paragraph of page 8, should say "are sophisticated ways to encode the state and action space of Mahjong (Li et al., 2020), we attempt to make simplifications with reasonable amounts of approximations since our goal is not to create strong Mahjong AI" |
ICLR | Title
Variational oracle guiding for reinforcement learning
Abstract
How to make intelligent decisions is a central problem in machine learning and artificial intelligence. Despite recent successes of deep reinforcement learning (RL) in various decision-making problems, an important but under-explored aspect is how to leverage oracle observation (information that is invisible during online decision making but available during offline training) to facilitate learning. For example, human experts will look at the replay after a Poker game, in which they can check the opponents’ hands to improve their estimation of the opponents’ hands from the visible information during playing. In this work, we study such problems based on Bayesian theory and derive an objective to leverage oracle observation in RL using variational methods. Our key contribution is to propose a general learning framework referred to as variational latent oracle guiding (VLOG) for DRL. VLOG features preferable properties such as robust and promising performance and versatility to incorporate with any value-based DRL algorithm. We empirically demonstrate the effectiveness of VLOG in online and offline RL domains with tasks ranging from video games to a challenging tile-based game, Mahjong. Furthermore, we publish the Mahjong environment and an offline RL dataset as a benchmark to facilitate future research on oracle guiding1.
1 INTRODUCTION
Deep reinforcement learning (DRL) has undergone rapid development in recent years (Sutton & Barto, 2018; Mnih et al., 2015; Vinyals et al., 2019). However, there is a common and important but under-explored aspect in RL: imagine that after playing a Poker game, a human player may look at the replay to check opponents’ hands and analyze this information to improve his/her playing strategy (or policy) for the next time. We refer to information like opponents’ hands as oracle observation, defined as information invisible to the agent during online task execution but available during offline training. By contrast, the information available during task execution is called executor observation. Such a scenario has been referred to as oracle guiding for RL (Li et al., 2020; Fang et al., 2021) (see Sec. 3 for a formal definition). Oracle guiding is common in real life: for example, taking an examination (the oracle observation is the answers to similar questions, which are available only during preparation), or training a robot to perform tasks on the Moon (when training the robot, we can provide it with terrain information that is not available during execution). The type of oracle observation can be diverse, including hindsight information (Harutyunyan et al., 2019; Guez et al., 2020), human feedback (Knox & Stone, 2009; Loftin et al., 2016; MacGlashan et al., 2017), re-calibrated data with post-processing, and hidden states in a partially observed setting (Li et al., 2020).
While humans naturally perform oracle guiding when learning to make decisions, it remains challenging in RL. The difficulties include: (1) how to guarantee that learning with oracle observation
∗ Work done during an internship at Microsoft Research Asia. Email: dongqi.han@oist.jp
1 https://github.com/Agony5757/mahjong
improves the main decision model using executor observation only, and (2) if introducing an auxiliary loss leveraging oracle observation, how to tradeoff between the main loss and auxiliary loss. While recent studies attempted to model oracle guiding in RL (Guez et al., 2020; Li et al., 2020; Fang et al., 2021), none of them addressed these difficulties (refer to the Related Work section for more details). In particular, all these proposed methods are heuristic: although empirical results showed performance gain with oracle guiding, it is not theoretically guaranteed that the usage of oracle observation improves execution performance.
In this paper, we propose a fundamentally new idea for oracle guiding based on Bayesian theory. Taking Poker as an example, we know that learning the optimal strategy is tractable if one knows the global, true state of the environment (or simply state2), including all visible or invisible cards, the opponents’ playing style, etc. (Azar et al., 2017; Jin et al., 2018; 2020). A key part of skill improvement is learning to estimate the probability distribution of the environmental state from executor observation. The common way human experts do this is to watch match replays where the oracle observation (e.g., opponents’ hands) is available, and then use the oracle-estimated state to correct the executor-estimated state. We interpret this in Bayesian language: the executor-estimated state as the prior distribution, and the oracle-estimated one as the posterior distribution. Thus, the training objective can be considered two-fold: learning to make decisions based on the posterior estimation of state, and learning a prior distribution of state closer to the posterior one.
We formulate this idea by proposing a novel learning framework for general oracle guiding problems based on variational Bayes (VB) (Kingma & Welling, 2014), referred to as variational latent oracle guiding (VLOG). VLOG has several preferable properties. First, VLOG is theoretically guaranteed to leverage oracle observation for improving the decision model that uses only executor observation. Second, VLOG is a versatile DRL framework that can be integrated into any value-based RL algorithm and is agnostic to the type of oracle observation. Third, VLOG does not suffer from the necessity of tuning additional hyper-parameters. Finally, we empirically show that VLOG contributes to better performance in a variety of decision-making tasks in both online and offline RL domains. The tasks include a simple maze navigation, video game playing, and the particularly challenging tile-based game Mahjong, in which humans heavily leverage oracle observation when learning (Li et al., 2020). We also contribute to the community by taking Mahjong as a benchmarking task for oracle guiding and publishing the RL environment and dataset to facilitate future research.
2 RELATED WORK
In the past few years, research interest has grown in DRL and imitation learning (Chen et al., 2020) that leverages oracle or hindsight information. For DRL, Guez et al. (2020); Fang et al. (2021) considered hindsight observation (executor observation at future steps) as the oracle observation during training. Guez et al. (2020) used hindsight observation to facilitate learning a representation of the current state. Another method (Fang et al., 2021) was used for stock trading: the authors trained a teacher (oracle) policy with hindsight information, and employed network distillation to make the student policy behave more similarly to the teacher policy. Both methods (Guez et al., 2020; Fang et al., 2021) are heuristic and focused on making use of future observation for better sequential modeling, while VLOG is theoretically guaranteed for any kind of oracle observation. For applications in imperfect-information games, Suphx (Li et al., 2020), a DRL-based AI for Mahjong, also introduced a method to leverage oracle observation (opponents’ hands) for stronger performance. They concatenate the oracle observation with the executor observation as the input of the policy network, where the oracle observation is multiplied by a scalar variable that is annealed from 1 to 0 during the training course. However, the method used in Li et al. (2020) is also heuristic and has only been tested in one task.
Variational Bayes (VB) is a well-established method and has been taken advantage of in RL. For example, control as probabilistic inference uses VB to connect the objective function of RL and the variational lower bound of a probabilistic inference problem (Furmston & Barber, 2010; Weber et al., 2015; Levine, 2018). Our idea differs since it frames a value regression problem as a maximum-likelihood problem, and then applies VB to solve it (see Sec. 4). Also, the usage of variational Bayesian network models for DRL has captured attention from researchers recently. For example, Ha & Schmidhuber (2018) proposed to employ a VAE to reduce the high dimensionality of image observation; Igl et al.
2 Generally, oracle observation does not necessarily contain all the information of the environmental state.
(2018); Han et al. (2020); Lee et al. (2020) proposed variational RNNs as the state-transition models to encode the belief states of the agent; Yin et al. (2021) utilized a variational sequential generative model for predicting future observation and used the prediction error to infer intrinsic reward to encourage exploration; and Okada et al. (2020) demonstrated performance gain by using a deep Bayesian planning model in continuous control tasks. Our study differs from the mentioned works by focusing on oracle guiding, and VLOG does not involve learning a state transition model.
3 ORACLE GUIDING POMDP
Here we define the problem based on the Partially Observable Markov Decision Process (POMDP) (Sondik, 1978). An oracle guiding POMDP is distinguished from the original POMDP by having two types of observations: executor and oracle. The executor observation x is always available to the agent, and the agent’s decision making (execution) relies on it. The oracle observation x̂ is not accessible during execution, but can be obtained afterward. x is included in x̂ since the former is always available. Thus x̂ contains no less information about the underlying environment state than x.
A formal definition of an oracle guiding POMDP is a tuple 〈S, A, P0, T, X, X̂, O, Ô, γ〉, where S and A are the state and action spaces, respectively. P0 specifies the initial state distribution such that P0(s) is the probability of a state s ∈ S being an initial state. T specifies the state transition probability such that T(s′, r|s, a) is the probability of reaching a new state s′ ∈ S with an immediate reward r ∈ R after taking an action a ∈ A at a state s ∈ S. X denotes the executor observation space and X̂ denotes the oracle observation space. O specifies the executor observation probability such that O(x|s) is the probability of an executor observation x ∈ X at a state s ∈ S. Similarly, Ô specifies the oracle observation probability such that Ô(x̂|s) is the probability of an oracle observation x̂ ∈ X̂ at a state s ∈ S. γ ∈ [0, 1) is the discount factor. Value functions are the expected value of return from a particular state or state-action pair. For a policy π, its Q-function is defined by
$$q^{\pi}(s, a) := \mathbb{E}_{\pi}\!\left[\sum_{n=0}^{\infty} \gamma^{n} r_{t+n} \,\middle|\, s_t = s, a_t = a\right],$$
where $\mathbb{E}_{\pi}$ indicates the expectation when the policy π is followed. The state value function $v^{\pi}$ is defined by $v^{\pi}(s) := \mathbb{E}_{\pi(a|s)}[q^{\pi}(s, a)]$. The agent aims to maximize $\mathbb{E}_{P_0(s)}[v^{\pi}(s)]$ with respect to π, and value functions play an important role to this end (Sutton & Barto, 2018).
4 VLOG: VARIATIONAL LATENT ORACLE GUIDING
Let us introduce a latent vector $z_t$, which is a probabilistic variable representing the environmental state $s_t$. From a Bayesian perspective, we consider the prior distribution $p(z_t|x_t)$ as the agent’s estimated probability density function (PDF) for $z_t$ based on the executor observation $x_t$. Meanwhile, the posterior PDF $q(z_t|\hat{x}_t)$ is modeled based on the oracle observation $\hat{x}_t$. In RL, the most basic requirement is to make a good estimation of return by a value function approximator $v(x_t) := \int v(z_t)\, p(z_t|x_t)\, dz_t$ (we denote it by v, but it can also be a Q-function by simply replacing $x_t$ with $(x_t, a_t)$) based on available information, i.e., the executor observation $x_t$. The target of return, denoted by $v^{\mathrm{tar}}_t$, can be estimated by any value learning algorithm such as TD(0) (Sutton & Barto, 2018), Peng’s Q(λ) (Kozuno et al., 2021), etc. (generally, $v^{\mathrm{tar}}$ can always be given by Bellman equations; however, one can also use the Monte-Carlo return as $v^{\mathrm{tar}}$ if available). In particular, we employed double Q-learning with dueling architecture (Wang et al., 2016) to compute $v^{\mathrm{tar}}$ due to its effectiveness and simplicity (Sec. 5 and B.1). We want to maximize the log-likelihood objective of the estimation of return based on the executor observation $x_t$ (i.e., for the executor model)
$$\mathcal{L} := \log P\big(v(x_t) = v^{\mathrm{tar}}_t \mid x_t\big) = \log \int_{z_t} P\big(v(z_t) = v^{\mathrm{tar}}_t \mid z_t\big)\, p(z_t \mid x_t)\, dz_t$$
$$= \log \int_{z_t} \frac{q(z_t \mid \hat{x}_t)}{q(z_t \mid \hat{x}_t)}\, P\big(v(z_t) = v^{\mathrm{tar}}_t \mid z_t\big)\, p(z_t \mid x_t)\, dz_t.$$
By Jensen's inequality, we have
$$\mathcal{L} \geq \int_{z_t} q(z_t \mid \hat{x}_t) \log \frac{P\big(v(z_t) = v^{\mathrm{tar}}_t \mid z_t\big)\, p(z_t \mid x_t)}{q(z_t \mid \hat{x}_t)}\, dz_t$$
$$= \int_{z_t} \Big[ q(z_t \mid \hat{x}_t) \log P\big(v(z_t) = v^{\mathrm{tar}}_t \mid z_t\big) - q(z_t \mid \hat{x}_t) \log \frac{q(z_t \mid \hat{x}_t)}{p(z_t \mid x_t)} \Big]\, dz_t$$
$$= \underbrace{\mathbb{E}_{q(z_t \mid \hat{x}_t)}\big[\log P\big(v(z_t) = v^{\mathrm{tar}}_t \mid z_t\big)\big]}_{\text{oracle prediction error}} - \underbrace{D_{KL}\big(q(z_t \mid \hat{x}_t)\,\big\|\,p(z_t \mid x_t)\big)}_{\text{regularization term}} := \mathcal{L}_{\mathrm{VLOG}}. \quad (1)$$
Thus we can maximize our initial objective $\mathcal{L}$ via maximizing $\mathcal{L}_{\mathrm{VLOG}}$, which is also known as the variational lower bound (Kingma & Welling, 2014), but in our oracle-guiding scheme. Since $p(z_t|x_t)$ and $q(z_t|\hat{x}_t)$ represent the PDFs of the latent vector obtained from the executor observation and the oracle observation, respectively, the meanings of the two terms in $\mathcal{L}_{\mathrm{VLOG}}$ now appear clear: the first term, i.e., the oracle prediction error, helps to improve value estimation from the posterior latent state distribution ($z_t$ computed with oracle observation); and the second term, i.e., the regularization term, helps to shape the prior representation of $z_t$ closer to the posterior one as latent oracle guiding. We would like to highlight that the VLOG objective is the lower bound of the objective for the prior executor model $v(x_t)$ (the estimation of return using the executor observation $x_t$) with the usage of the oracle observation $\hat{x}_t$. This lower bound guarantees that the usage of oracle observation facilitates the learning of the executor model, which is our original motivation. Remark 1. One may use any form of the approximate posterior q; depending on the choice, different instances of VLOG are possible. Furthermore, one may directly use $v(x_t)$ instead of $p(z_t|x_t)$. These design choices allow users to incorporate any prior knowledge on oracle observation. For example, if one knows the range of the state-value at $x_t$ is a closed interval $[l, u]$, the approximate posterior $q(v_t|\hat{x}_t)$ can be restricted to a family of probability distributions supported on $[l, u]$.
4.1 IMPLEMENTATION WITH NEURAL NETWORKS
Inspired by the implementation of the variational auto-encoder (VAE; Kingma & Welling, 2014), we propose the neural network architecture of VLOG (Fig. 1). The executor observation $x_t$ and oracle observation $\hat{x}_t$ are processed by two distinct encoder networks to compute the prior and posterior distributions of the latent vector, respectively. During training, both $x_t$ and $\hat{x}_t$ are available, and all the network parameters are updated by maximizing the VLOG objective in an end-to-end manner (Fig. 1A). During execution, the agent computes the prior distribution $p(z|x_t)$ for decision making (Fig. 1B) without using oracle observation. $z_t$ is computed from a parameterized normal distribution:
$$p(z_t \mid x_t) = \mathcal{N}\big(\mu^{p}_t, \exp(\log \sigma^{p}_t)\big), \quad (\mu^{p}_t, \log \sigma^{p}_t) = \mathrm{prior\_encoder}(x_t),$$
$$q(z_t \mid \hat{x}_t) = \mathcal{N}\big(\mu^{q}_t, \exp(\log \sigma^{q}_t)\big), \quad (\mu^{q}_t, \log \sigma^{q}_t) = \mathrm{posterior\_encoder}(\hat{x}_t).$$
For computing $P(v(z_t) = v^{\mathrm{tar}}_t \mid z_t)$ in Eq. 1, we simply assume it follows a normal distribution, and in practice estimate it with the mean squared error between $v(z_t)$ and $v^{\mathrm{tar}}_t$. The reparameterization trick is used to perform end-to-end training as in VAE (Kingma & Welling, 2014). Then the output of the decoder (value function) can be obtained by $v(z_t) = \mathrm{decoder}(z_t)$. Note that $z_t$ is obtained using the posterior encoder during training, and using the prior encoder during execution (Fig. 1A, B).
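Putting the pieces together, a condensed sketch of one VLOG training step could look as follows (our paraphrase of Fig. 1A with placeholder module names; the coefficient β is introduced in Sec. 4.2, and β = 1 recovers Eq. 1):

import torch
import torch.nn.functional as F

def vlog_loss(prior_encoder, posterior_encoder, decoder, x, x_hat, v_tar, beta=1.0):
    mu_p, log_sigma_p = prior_encoder(x)          # prior from executor observation
    mu_q, log_sigma_q = posterior_encoder(x_hat)  # posterior from oracle observation
    # Reparameterization trick: sample z from the posterior during training
    z = mu_q + torch.exp(log_sigma_q) * torch.randn_like(mu_q)
    # Gaussian assumption on P(v(z) = v_tar | z) -> mean squared error
    prediction_loss = F.mse_loss(decoder(z), v_tar)
    # Closed-form KL between the diagonal-Gaussian posterior and prior
    kl = (log_sigma_p - log_sigma_q
          + (torch.exp(2 * log_sigma_q) + (mu_q - mu_p) ** 2) / (2 * torch.exp(2 * log_sigma_p))
          - 0.5).sum(dim=-1).mean()
    return prediction_loss + beta * kl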
4.2 TASK-AGNOSTIC LATENT BOTTLENECK CONTROL
To learn a better representation, we borrow the idea from β-VAE (Higgins et al., 2016) to multiply a coefficient β to the regularization term. Thus we have the loss function (negative lower bound) as
$$\mathcal{J}^{\beta}_{\mathrm{VLOG}} = -\mathbb{E}_{q(z_t \mid \hat{x}_t)}\big[\log P\big(v(z_t) = v^{\mathrm{tar}}_t \mid z_t\big)\big] + \beta\, D_{KL}\big(q(z_t \mid \hat{x}_t)\,\big\|\,p(z_t \mid x_t)\big). \quad (2)$$
The hyper-parameter β controls the capacity of the latent information bottleneck (Tishby & Zaslavsky, 2015; Alemi et al., 2017). We found the choice of β is important for the performance of VLOG in RL (see Appendix B.3). However, having extra hyper-parameters is not desired. Inspired by the method used in Burgess et al. (2017) for controlling the scale of the KL divergence in β-VAE, we propose a task-agnostic method to automatically adjust β by setting a target KL divergence $D^{\mathrm{tar}}_{KL}$. In particular, we minimize the auxiliary loss function (with β as the parameter being optimized)
$$\mathcal{J}_{\beta} = \Big(\log_{10} D^{\mathrm{tar}}_{KL} - \log_{10} D_{KL}\big(q(z_t \mid \hat{x}_t)\,\big\|\,p(z_t \mid x_t)\big)\Big) \log \beta. \quad (3)$$
The intuition here is to strengthen the regularization by increasing β when the divergence between the prior and posterior is too large, and vice versa. This method is similar to that used in soft actor-critic for automatically adjusting the entropy coefficient (Haarnoja et al., 2019), while we use it for the KL divergence coefficient. Importantly, we found a well-performing value $D^{\mathrm{tar}}_{KL} = 50$ that is agnostic to other design choices. It worked well across a range of different tasks and networks (Sec. 5). Therefore, we do not need to tune β. We provide more discussion of this method in Appendix D.
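A minimal sketch of optimizing Eq. 3 (hypothetical names; here we parameterize log β to keep β positive, a design choice not specified in the text):

import math
import torch

log_beta = torch.zeros(1, requires_grad=True)        # beta starts at 1
beta_optimizer = torch.optim.Adam([log_beta], lr=1e-4)
d_kl_target = 50.0

def update_beta(d_kl_value):
    # Eq. 3: J_beta = (log10(D_KL^tar) - log10(D_KL)) * log(beta)
    coeff = math.log10(d_kl_target) - torch.log10(d_kl_value.detach())
    loss = coeff * log_beta
    beta_optimizer.zero_grad()
    loss.backward()
    beta_optimizer.step()
    return log_beta.exp().item()                     # current beta for the VLOG loss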
5 EXPERIMENTS
How does VLOG perform in practice? We investigated the empirical performance of VLOG in three types of tasks using online or offline RL, from simple to difficult. In the following experiments, we used double DQN with dueling network architecture (van Hasselt et al., 2016; Wang et al., 2016) as the base RL algorithm; the model and loss functions for RL are defined in Appendix B.1. As DRL is susceptible to the choice of hyper-parameters, introducing any new hyper-parameters might obscure the effect of oracle guiding. Double DQN and the dueling architecture are preferable for the base algorithm since they require no additional hyper-parameters, in contrast to other DQN variants (Hessel et al., 2018), such as prioritized experience replay (Schaul et al., 2016), noisy network (Fortunato et al., 2018), categorical DQN (Bellemare et al., 2017), and distributed RL (Kapturowski et al., 2018). Importantly, we used the same hyper-parameter setting for all methods and environments as much as possible (see Appendix B.2).
5.1 MAZE
We first demonstrate how VLOG helps to shape the latent representation by leveraging oracle observation in learning. The testbed is a maze navigation task3 (Fig. 2A) with 10×10 grids. The executor observation is the (x, y) position, where x, y are continuous values randomly sampled within each grid of the maze (thus the observations in two adjacent but wall-separated grids may be very close). At each step, the agent selects an action (going up, down, right, or left) and moves to another grid if not blocked by a wall. We provided the VLOG agent with the oracle observation $(x_c, y_c, d_g)$ during training, where $x_c, y_c$ are the coordinates of the center of the current grid, and $d_g$ is the (shortest) path distance to the goal from the current grid. It is intuitive that although the raw observation is (x, y), $d_g$ matters more in a maze navigation task. We empirically investigated how much such oracle observation used in VLOG's learning could help to shape the latent representation of z with respect to $d_g$ rather than position. The encoder and decoder were both 2-layer multi-layer perceptrons (MLPs) with width 256 and ReLU activation. The size of the latent vector $z_t$ for VLOG was 128 since we computed both µ and σ (Appendix C).
Experiments show that the baseline agent struggled to reach the goal (Fig. 2B), while VLOG agents stably solved the task after learning. To check how the usage of VLOG affected the learned latent representation, we visualize the latent representation of both the VLOG and baseline models with principal component analysis (PCA; Pearson, 1901). In Fig. 2C, we map the path distance to goal $d_g$ to color and plot the scores of the first 2 PCs of z (computed using executor observation)
3https://github.com/MattChanTK/gym-maze
for VLOG and the corresponding latent state for baseline using successful trials (“latent layer” in Fig. 1C). The latent state of VLOG showed a smoother and more interpretable representation of distance-to-goal compared to that of baseline. We then plot the latent representations for different positions in the maze in Fig. 2D. The latent state of VLOG more clearly represented $d_g$, consistent with the result (Fig. 2C). In particular, we looked into a rectangular region (denoted by rectangles in Fig. 2D) inside which the left 2 grids and right 2 grids are segregated by a wall. We found the corresponding areas in the latent PC space and circled them in Fig. 2C. While these 4 grids are close in (x, y) (executor observation), their distances-to-goal (oracle observation) are highly distinct. By leveraging oracle guiding using VLOG, the agents can clearly differentiate the left 2 grids and the right 2 grids in latent space as shown in Fig. 2C, D, left (note that the latent state z of VLOG here was computed using executor observation only). By contrast, the latent representations of these grids overlapped for the baseline model, which did not utilize the oracle observation (Fig. 2C, D, right). In sum, we demonstrated with a toy example that VLOG effectively helps the latent space to couple with oracle state useful for the task. The following sections turn to experiments on more complicated tasks and discuss how VLOG can improve practical performance.
5.2 NOISY MINATAR
To evaluate how VLOG scales to higher-dimensional state spaces, we tested it on a set of MinAtar video games. MinAtar (Young & Tian, 2019) is a test platform for AI agents, which implements 5 miniaturized Atari 2600 games with discrete actions (Seaquest, Breakout, Space Invaders, Freeway and Asterix). MinAtar is inspired by the Arcade Learning Environment (Bellemare et al., 2013) but simplifies the environments for efficiency. The observation is 10×10 pixels with multiple channels indicating different objects. In the real world, observations usually contain some noise. Thus it is natural to consider the noisy observation as the partially observable executor observation, and the original, non-noisy observation as the oracle one. Suppose that at each frame, each pixel may “break” randomly with an independent probability of 1/8 (Fig. 3A). The original observation at a broken pixel is erased and replaced by a different value in all channels. We consider such noisy MinAtar environments with the noisy pixels as the executor observation and the original pixels as the oracle observation.
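The corruption process itself is only a few lines (a sketch of our reading of the setup; the exact fill value used for broken pixels is an assumption):

import numpy as np

def corrupt_observation(obs, p_break=1/8, broken_value=-1.0, rng=np.random):
    # obs: (10, 10, n_channels) MinAtar frame; each pixel breaks independently
    broken = rng.random(obs.shape[:2]) < p_break   # (10, 10) boolean mask
    noisy = obs.copy()
    noisy[broken] = broken_value                   # overwrite all channels at broken pixels
    return noisy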
The network structure was the same as that for Maze, but the encoder was replaced by a CNN (Appendix C). We ran experiments on all 5 environments of MinAtar with VLOG as well as baseline, oracle, and alternative oracle guiding methods (see Appendix A for details). The baseline model always uses executor observation as the network input (Fig. 1C). Oracle is the same as baseline except that it always receives oracle observation (i.e., cheating; we ran the oracle experiments for reference). VLOG-no oracle is an ablation of VLOG where we use the executor observation as the input to the posterior encoder in VLOG (oracle observation is not used). Suphx-style oracle guiding is the oracle guiding method used in the Mahjong AI Suphx (Li et al., 2020), in which the executor observation and the dropped-out oracle observation (with dropout probability $p_{\mathrm{dropout}}$) are concatenated as the input to the network. As training proceeds, $p_{\mathrm{dropout}}$ is gradually increased from 0 to 1, so the trained network does not need oracle observation as input (Appendix A). OPD-style oracle guiding is the oracle guiding method used in oracle policy distillation (OPD) (Fang et al., 2021). OPD-style oracle guiding first trains a teacher model using oracle observation as input, and then trains the executor model using an auxiliary loss, which is the error between the executor's and the teacher's value-function estimates (Appendix A).
The results show that oracle usually performed the best, as expected. We normalized the performance of the non-oracle models using the oracle model as reference, for clearer comparison (Fig. 3B). Among all the oracle guiding methods (VLOG, OPD-style, and Suphx-style), VLOG consistently performed the best. It is notable that VLOG and VLOG-no oracle performed surprisingly well in Seaquest. This can be explained by the fact that Seaquest is a task with a local optimum (see Appendix E), while the stochasticity in VLOG's hidden states helped exploration in latent space to escape from the local optimum (a similar idea appears in Fortunato et al. (2018), but there the noise was added to the network weights). Except in Seaquest, VLOG-no oracle did not show a significant performance difference from baseline, showing that the performance gain of VLOG in this task set mainly came from leveraging oracle observation to shape the latent distribution; the usage of a variational Bayesian model was at least not harming performance when there was no helpful oracle information.
5.3 OFFLINE LEARNING ON MAHJONG
Mahjong is a popular tile-based game with hundreds of millions of players worldwide (here we consider the Japanese variant). The game is like many other card games (but using tiles instead of cards), in which multiple (usually four) players draw and discard tiles (136 tiles in total) alternately to satisfy winning conditions. It is a highly challenging game characterized by (1) imperfect information in executor observation (a player cannot see opponents’ private tiles and the remaining tiles to be drawn), (2) stochastic state transitions as in many card games, and (3) extremely high game complexity (i.e., the number of distinguishable, legal game states). The complexity of Mahjong is much larger than $10^{166}$ (Appendix F.1). For reference, the complexity of Go is $\sim 10^{172}$ (Silver et al., 2016) and the complexity of no-limit Poker is $\sim 10^{162}$ (Johanson, 2013). In Mahjong, it is hard to make optimal decisions based on executor observation because the outcomes heavily depend on invisible information, and the complexity of the invisible state space is as high as $10^{48}$ on average (Li et al., 2020). In response to this challenge, Li et al. (2020) introduced Suphx-style oracle guiding and demonstrated a performance gain. Thus, we consider Mahjong a promising test platform for oracle guiding methods. Since the number of possible states in Mahjong is extremely large, it is costly to explore with random actions in an online RL manner, and no previous work could train a strong Mahjong AI with purely online RL. Also, we would like to examine the effectiveness of VLOG in offline RL settings. For these reasons, we transferred to offline RL (Levine et al., 2020) for the Mahjong task using expert demonstrations.
We processed about 23M steps of human experts’ plays from the online Mahjong game platform Tenhou (https://tenhou.net/mjlog.html) into a dataset for offline RL (data were augmented using the symmetry in Mahjong, see Appendix F). Also, we created a simulator of Mahjong as the testing environment. Though there are sophisticated ways to encode the state and action space of Mahjong (Li et al., 2020), we attempt to make simplifications with reasonable amounts of approximation since our goal is not to create a strong Mahjong AI, but to use Mahjong as a platform to study oracle guiding problems. In our case, the action space is composed of 47 discrete actions covering all decisions in Mahjong. An executor observation is a matrix encoding public information and the current player’s private hand; an oracle observation concatenates the executor observation with the information of the opponents’ private hands (see Appendix F). We used a 1-D CNN as the encoder, as is common in Mahjong AIs (Li et al., 2020), and the size of $z_t$ and the decoder network width were increased to 512 and 1024, respectively (Appendix C).
Note that although Mahjong is a 4-player game, using offline RL data to train an agent does not involve multi-agent RL (Zhang et al., 2021) because the offline dataset is fixed: the opponents have fixed policies and thus can be considered parts of the environment. Our experiments focused on single-agent RL to avoid the complexity caused by considering multi-agent RL. We investigated two kinds of offline RL settings. The first is conservative Q-learning (CQL) (Kumar et al., 2020). Our CQL setting differed from the online RL setting in previous sections by adding an auxiliary CQL loss (Kumar et al., 2020) to the Q-learning loss function (Appendix B.1.2). The other is behavior cloning (BC). Although VLOG was designed for value-based RL, we could straightforwardly incorporate VLOG with BC by letting the network predict the action instead of the Q function. The learning was conducted by minimizing the cross entropy between the output and the target action (demonstration) as in a classification problem. Note that we did not test OPD-style oracle guiding in the BC setting because it would be equal to the baseline, since we can directly use demonstration actions as the oracle policy for distillation.
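A sketch of the BC variant's objective (hypothetical shapes; invalid actions are masked out using the valid-action set provided by the environment):

import torch.nn.functional as F

def bc_loss(action_logits, demo_actions, valid_mask):
    # action_logits: (batch, 47); valid_mask: (batch, 47) booleans; demo_actions: (batch,)
    action_logits = action_logits.masked_fill(~valid_mask, float('-inf'))
    return F.cross_entropy(action_logits, demo_actions)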
Because Mahjong is a zero-sum, four-player game, we tested the performance of trained models in two scenarios: playing with the (trained) baseline model (Table 1) and playing with each other (fighting each other, Table 2). In the first scenario, four agents played the same matches at the game table, where two of them were the agents being tested and the other two were the baseline models. Although each agent played for itself and there was no communication between players, for statistics we simply added up the payoffs of the two tested agents, and considered that they won a match if one of them ranked top (so the match win rate would be 50% if equally strong as baseline) (Table 1).
For CQL, the results (Table 1 left and 2 upper) show that VLOG substantially outperformed the baseline and alternative methods (because Mahjong is a highly random game, a 55.7% match win rate indicates a large skill gap). Interestingly, VLOG was even comparable to oracle. This can be explained by the fact that VLOG also benefited from its Bayesian property, which is consistent with VLOG-no oracle showing a significant performance gain over the baseline model (Table 1 left). Still, the oracle model learned to reduce deal-ins (i.e., a player discards a tile and another player wins the game by picking up this tile to complete a winning hand) since it could explicitly see the opponents’ private tiles, showing a much lower deal-in rate than the other non-cheating models (Table 2 upper).
In the BC setting, the agents did not learn a value function, but tried to predict human experts’ actions. Therefore, the training procedure did not involve reasoning about the relationship between the playing outcome and the oracle observation, but just imitating human behaviors. This can be seen from the result that oracle did not substantially outperform baseline in BC (Table 1 right and 2 lower). However, VLOG and VLOG-no oracle still showed performance gains, thanks to the stochastic modeling.
6 SUMMARY
We have proposed VLOG – a variational Bayesian learning framework for leveraging oracle observation to facilitate DRL, especially in partially observable environments. VLOG is applicable to any RL problem in which there is oracle observation that may help the executor make decisions.
We first introduced a latent vector z to represent the environmental state. The prior and posterior distributions of z are modeled using the executor and oracle observation, respectively. Then, we derived a variational lower bound (Eq. 2), by maximizing which we can optimize the executor model using the oracle observation. We developed the corresponding methodology for DRL, which can be incorporated into most RL algorithms that need to estimate a value function.
If oracle observation contains more information to retrieve the true environmental state (or is the true environmental state), VLOG’s oracle guiding in latent space helps to shape a latent representation in neural networks closer to the true one. We demonstrated this advantage of VLOG using the maze task. Then, we scaled VLOG up to solve image-based video games, and compared it with alternative oracle-guiding methods. Though all oracle-guiding methods showed performance gains over the baseline model, VLOG consistently performed the best. Finally, we moved to the offline RL domain using the challenging tile-based game Mahjong, in which an executor plays with hidden information and random state transitions, and observed that VLOG achieved the best overall performance.
We also conducted an ablation study of VLOG (VLOG-no oracle) in which the posterior model received the executor observation instead of the oracle one. VLOG-no oracle demonstrated performance gains in tasks that may benefit from stochasticity; otherwise, it performed similarly to the deterministic baseline. This clarified that the source of VLOG’s promising performance is two-fold: oracle guiding and stochastic modeling. Finally, we publish the Mahjong dataset for offline RL and the corresponding RL environment to facilitate future research on oracle guiding.
ACKNOWLEDGEMENT
This work was supported by Microsoft Research Asia. Kenji Doya was supported by Japan Society for the Promotion of Science KAKENHI Grant Numbers JP16K21738, JP16H06561 and JP16H06563, as well as by Okinawa Institute of Science and Technology.
REPRODUCIBILITY STATEMENT
The source code of VLOG can be found in Supplementary Material.
ETHICS STATEMENT
We declare no conflict of interest. We tried to use colors friendly to people with color recognition disabilities (Fig. 2C, D) and distinguishable markers for performance curves of different models (Fig. 2B, Fig. 3 and Fig. 6). Our Mahjong dataset was generated using downloadable, public game replay data from Tenhou.net with post-processing. The dataset contains no private information about players. Since VLOG is a general framework for leveraging oracle information, we cannot foresee any direct application of VLOG to malicious purposes. However, any new RL algorithm might confer increased autonomy on an agent, and eventually lead to a completely autonomous agent, which can be used for malicious purposes, e.g., fully autonomous soldiers.
B RL ALGORITHMS AND HYPER-PARAMETERS
B.1 RL ALGORITHMS
B.1.1 DUELING DOUBLE DQN FOR MAZE AND MINATAR TASKS
As we discussed in Sec. 5, we used double DQN with dueling network architecture (van Hasselt et al., 2016; Wang et al., 2016) as the base RL algorithm, because it works relatively well (Hessel et al., 2018) without introducing additional hyper-parameters.
The Dueling architecture of DQN (Wang et al., 2016) is defined as follows (see Appendix C for hidden layer size):
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    def __init__(self, input_size, action_num, hidden_layers):
        super(DuelingQNetwork, self).__init__()
        self.input_size = input_size
        self.action_num = action_num
        self.hidden_layers = hidden_layers
        self.network_modules = nn.ModuleList()
        last_layer_size = input_size
        for layer_size in hidden_layers:
            self.network_modules.append(nn.Linear(last_layer_size, layer_size))
            self.network_modules.append(nn.ReLU())
            last_layer_size = layer_size
        self.value_layer = nn.Linear(last_layer_size, 1)
        self.advantage_layer = nn.Linear(last_layer_size, action_num)
        self.main_network = nn.Sequential(*self.network_modules)

    def forward(self, x):
        h = self.main_network(x)
        # State value, broadcast over actions
        v = self.value_layer(h).repeat_interleave(self.action_num, dim=-1)
        q0 = self.advantage_layer(h)
        # Mean-centered advantages (dueling architecture, Wang et al., 2016)
        a = q0 - torch.mean(q0, dim=-1, keepdim=True).repeat_interleave(self.action_num, dim=-1)
        q = v + a
        return q
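For illustration, the maze decoder could be instantiated and queried like this (sizes following Appendix C):

import torch

q_net = DuelingQNetwork(input_size=128, action_num=4, hidden_layers=[256, 256])
z = torch.randn(32, 128)              # a batch of latent vectors
q_values = q_net(z)                   # (32, 4) Q-values, one per action
greedy_actions = q_values.argmax(dim=-1)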
Double deep Q-learning (van Hasselt et al., 2016) was used to compute $Q^{\mathrm{target}}$ in Fig. 1A (one can use any other algorithm to compute $Q^{\mathrm{target}}$ without changing other parts). In particular, as in Wang et al. (2016), we have
$$Q^{\mathrm{target}}_t = r_t + \gamma\, Q\big(z_{t+1}, \operatorname*{arg\,max}_{a'} Q(z_{t+1}, a'; \theta);\, \theta^{-}\big),$$
where $r_t$ is the reward at step t, γ is the discount factor (Table 3), and θ denotes the parameters of the Q network (MLP decoder) used to compute the Q-function (Appendix C). Note that z is given by the posterior encoder with the oracle observation $\hat{x}$ as input, since the oracle prediction error term $\mathbb{E}_{q(z_t|\hat{x}_t)}[\log P(v(z_t) = v^{\mathrm{tar}}_t \mid z_t)]$ in Eq. 1 is an expectation over the posterior distribution $q(z|\hat{x})$. Following deep RL conventions (Mnih et al., 2015; Wang et al., 2016; van Hasselt et al., 2016), we used a target Q network with the same structure as the original Q network, whose parameters are denoted by $\theta^{-}$ (Table 3). Every 1,000 steps, the target Q network copies the parameters from the original Q network (Table 3). The first term of the VLOG loss function (Eq. 2) is then simply given by the mean squared error between $Q^{\mathrm{target}}$ and the output of the Q network (MLP decoder) for the Maze and MinAtar tasks.
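In code, the target computation could be sketched as follows (hypothetical tensor names; z_next comes from the posterior encoder as described above):

import torch

@torch.no_grad()
def double_dqn_target(q_net, target_q_net, z_next, reward, done, gamma=0.99):
    # Select actions with the online network, evaluate them with the target network
    next_actions = q_net(z_next).argmax(dim=-1, keepdim=True)
    next_q = target_q_net(z_next).gather(-1, next_actions).squeeze(-1)
    return reward + gamma * (1.0 - done) * next_q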
B.1.2 DUELING DOUBLE DQN WITH CONSERVATIVE Q LEARNING FOR MAHJONG
In Mahjong, as we transfer to the offline RL domain (Sec. 5.3), directly using an off-policy RL algorithm usually results in very unsatisfactory performance (Levine et al., 2020).
Therefore, we complement the loss function of VLOG (Eq. 2) with an auxiliary conservative Q-learning (CQL) loss (Kumar et al., 2020),
$$\mathcal{J}_{\mathrm{CQL}} = \alpha\, \mathbb{E}_{\hat{x}, a \sim \mathcal{D}}\Big[\log \sum_{a' \in \mathcal{A}} \exp\big(Q(z(\hat{x}), a'; \theta)\big) - Q(z(\hat{x}), a; \theta)\Big],$$
where $\mathcal{D}$ is the offline dataset we used for Mahjong and α = 1. The combined loss function used for Mahjong (CQL) is $\mathcal{J}^{\beta}_{\mathrm{VLOG}} + \mathcal{J}_{\mathrm{CQL}}$.
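The auxiliary term can be computed directly from the decoder's Q-values (a sketch with α = 1 as in the text):

import torch

def cql_loss(q_values, demo_actions, alpha=1.0):
    # q_values: (batch, n_actions); demo_actions: (batch,) indices from the dataset
    logsumexp_q = torch.logsumexp(q_values, dim=-1)
    data_q = q_values.gather(-1, demo_actions.unsqueeze(-1)).squeeze(-1)
    return alpha * (logsumexp_q - data_q).mean()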
B.2 HYPER-PARAMETER SELECTION
We summarize the hyper-parameters in Table 3.
B.3 SENSITIVITY ANALYSIS FOR β
As we mentioned in Sec. 4.2, the coefficient β is important to the learning of VLOG agents. If we fix the value of β throughout training, a β that is too large or too small will result in worse performance. The corresponding results are shown in Fig. 6.
C NETWORK STRUCTURES
For simplicity, we use the same hidden layer size for all the fully connected layers, where the hidden layer size is 256 for maze and MinAtar, and 1024 for Mahjong.
C.1 ENCODER
Since we targeted various tasks, we used a different encoder network for each type of environment. The prior and posterior encoders have the same structure, except for different sizes of input features/channels.
For the maze task, the encoder was a 2-layer MLP with ReLU activation. The output size is also equal to the hidden layer size.
For the MinAtar tasks, we used a 2-D CNN encoder defined as follows:

import torch.nn as nn

cnn_module_list = nn.ModuleList()
cnn_module_list.append(nn.Conv2d(n_channels, 16, 3, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(16, 32, 3, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(32, 128, 4, 2, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(128, 256, 2, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Flatten())
cnn_minatar = nn.Sequential(*cnn_module_list)
where n_channels is the number of channels of the executor or oracle observation. The output size is equal to the hidden layer size.
For Mahjong, because the second dimension of the observation (tile ID) has a local contextual relationship (Li et al., 2020), we used a 1-D CNN (convolving along the tile-ID dimension) as the encoders, defined as follows:

cnn_module_list = nn.ModuleList()
cnn_module_list.append(nn.Conv1d(n_channels, 64, 3, 1, 1))
cnn_module_list.append(nn.Conv1d(64, 64, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv1d(64, 64, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv1d(64, 32, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Flatten())
cnn_mahjong = nn.Sequential(*cnn_module_list)
The output size is 1088, close to hidden layer size.
C.2 LATENT LAYER
For VLOG and VLOG (no-oracle), the size of z layer is half of hidden layer size because we need to estimate both the mean and the variance of z. For all the other models (baseline, oracle, OPD-style, Suphx-style), the latent layer is one fully connected layer with size hidden layer size and ReLU activation.
C.3 DECODER
The decoders for all models were 2-layer MLPs with the same hidden layer size. Except for BC on Mahjong, the input of the decoder was the output of the latent layer concatenated with the action, and we used the dueling Q-network structure (Wang et al., 2016) to output a scalar Q value. ReLU activation was used except for the output.
For BC on Mahjong, the input of the decoder was the output of the latent layer. The output of the decoder was the logits of actions, and the action probabilities could be obtained using softmax.
D TASK-AGNOSTIC LATENT BOTTLENECK CONTROL
As we discussed in Sec. 4.2, the coefficient β of the regularization term in the VLOG loss function (Eq. 2) is adaptively regulated via Eq. 3, given $D^{\mathrm{tar}}_{KL}$, which is another hyper-parameter. While our experiments demonstrated the effectiveness of this approach, the following discusses more of the thoughts behind this design choice.
In principle, replacing one hyper-parameter with another does not always make training easier. However, in practice (especially in deep RL), performance can be highly sensitive to certain hyper-parameters; e.g., the entropy coefficient α in the original soft actor-critic algorithm (Haarnoja et al., 2018) needs to be tuned for each robotic task because the reward magnitude differs among tasks. It is therefore beneficial to replace a sensitive hyper-parameter with one that does not need fine-tuning. For example, in the follow-up paper on soft actor-critic, the authors adapted the entropy coefficient α automatically by introducing another hyper-parameter, the entropy target (Haarnoja et al., 2019). They empirically found that setting the entropy target to the negative of the agent's degrees of freedom works well, thus avoiding the need to tune α.
Our idea of replacing β with $D^{\mathrm{tar}}_{KL}$ follows similar reasoning. One obvious problem is that the magnitude of the "oracle prediction error" term (Eq. 2) depends on the reward magnitude of the task, so β would also have to be adjusted to match the task's reward scale. In contrast, $D^{\mathrm{tar}}_{KL}$ depends only on the magnitudes of the prior and posterior distributions of z, which do not differ much among tasks (usually on the order of 1). In practice, we found that $D^{\mathrm{tar}}_{KL} = 50$ works well for all the tasks, including Maze, MinAtar, and Mahjong.
Another way of regulating β is to employ a linear or exponential scheduler. For example, Burgess et al. (2017) used a linear scheduler for the target KL divergence and obtained good results. However, a scheduler introduces more hyper-parameters (at least two: the initial β and the final β), which runs against our intention of reducing the impact of hyper-parameters.
E SEAQUEST LOCAL OPTIMUM
In Seaquest, the agent drives a submarine diving into the sea to shoot at enemies and rescue divers to earn scores. However, the submarine has limited oxygen: to survive, it must surface to replenish its oxygen before running out, during which it temporarily cannot earn scores. A typical local optimum is to use the last remaining oxygen for diving instead of surfacing.
F MAHJONG
F.1 ESTIMATION OF GAME COMPLEXITY OF MAHJONG
We consider 4-player Japanese Mahjong4. Although there are minor rule variants, the following estimation applies to general cases.
For easier computation, we make two major simplifications (i.e., we estimate a lower bound of the game complexity): (1) melding from discard5 is not considered, and (2) information other than the tiles, such as the results of contextual games and the players' points, is not considered.
There are 34 types of tiles, each with 4 duplicates (so 136 tiles in total). We further restrict our estimation to the last turn (i.e., when the last tile is drawn). Among the 136 tiles, 53 tiles are in someone's hand (the 4 players have 14, 13, 13, and 13 tiles in hand, respectively). Permutation of tiles within a hand does not make a difference, while permutation matters for tiles not in a hand. The number of distinguishable configurations of the 136 tiles can thus be computed as $\frac{136!}{(13!)^3 \times 14! \times (4!)^{34}} \sim 10^{145}$.
Meanwhile, for each discarded tile, it is important to know whether it was discarded immediately after being drawn or not. For the 70 discarded tiles, the number of possibilities is simply $2^{70} \sim 10^{21}$. Therefore, the lower bound of the game complexity of Mahjong is estimated as
$$10^{145} \times 10^{21} \sim 10^{166}.$$
If the other simplified-away information is considered, the state space could be much larger. For example, consider the current points of each player. The most common rule is that each player starts with 25,000 points, with 100 points as the minimal unit (1,000 units in total), and the game terminates if someone's points go negative. The number of possibilities can then be converted to the answer of "how many ways are there to distribute 1,000 candies to 4 kids", which is approximately $(1000+1)^{(4-1)}/(4-1)! \sim 10^{8}$.
F.2 DETAILS OF OBSERVATION SPACE AND ACTION SPACE ENCODING
In our Mahjong environment6, the action space is composed of 47 discrete actions (Table 5). Because not all actions are available at a certain step, we also provide the set of valid actions among all 47 actions at each step according to the rules, which can be used during playing and learning.
The oracle observation has the shape of 111 × 34 (executor observation is a part of oracle observation, with shape 93 × 34) (Fig. 5). The first dimension corresponds to 111 features (channels). The second dimension of observation (with size 34) corresponds to 34 Mahjong tiles (the order is Character 1-9, Dot 1-9, Bamboo 1-9, East, South, West, North, White, Green, Red). We used 1-D CNNs with convolution along the second dimension for the encoders. Suppose the current player is player 0, and other players are numbered 1, 2, 3 counter-clockwise. The value of any element in an observation is 1 or 0, explained in Table 4.
4https://en.wikipedia.org/wiki/Japanese Mahjong 5A meld is a specific pattern of three or four tiles. A player can pick up a discarded tile from others to form a meld by displaying the meld to public, if certain condition is satisfied. 6The Mahjong environment we used in this papers is available on https://github.com/pymahjong/pymahjong for reproducibility. However, we recommend to use the newer version https://github.com/Agony5757/mahjong which is better-supported by the authors and much faster.
F.3 DATA AUGMENTATION
In (Japanese) Mahjong, there are 3 suit sets of tiles (characters, dots, bamboos), 4 wind tiles, and 3 dragon tiles. The 3 suit sets are symmetric and thus exchangeable with each other7. So are the 4 wind tiles and the 3 dragon tiles. Exploiting this symmetry, we augmented the offline RL dataset by randomly exchanging the 3 suit sets, the 4 wind tiles, and the 3 dragon tiles, respectively.
7 There is one exception: the winning-hand pattern "all green". Since it is an extremely rare case, we simply ignore it. | 1. What is the main contribution of the paper regarding reinforcement learning and latent state representation?
2. What are the strengths and limitations of the proposed method, particularly in its reliance on oracle observations and simplicity of the employed RL algorithm?
3. Do you have any concerns or questions regarding the formulation of ELBO and its execution paths?
4. How does the reviewer assess the novelty and relevance of the paper in comparison to other works combining variational inference and policy learning?
5. Are there any experimental settings or assumptions in the paper that seem unreasonable or unclear to you? | Summary Of The Paper
Review | Summary Of The Paper
This paper tackles reinforcement learning scenarios when the raw observation obtained from the executor is not very descriptive and there are high quality oracle observations available for the learning process to guide the policy training. The proposed method aims to derive a representative latent state via incorporating a variational inference perspective. Specifically, the oracle and raw observations are processed by two individual encoders and then the two latent spaces are fused and fed to a shared decoder. The latent distribution obtained from the oracle observations serve as the posterior distribution and that from the raw observation serves as the prior distribution. The distance two distributions is minimized via minimizing a KL divergence term.
Review
Strength
:
This paper is well written and easy to follow.
Leveraging Bayesian theory to tackle the representation learning problem in reinforcement learning is an important research direction.
Limitations
:
The proposed method is not very general as it could only be used in the tasks where oracle observations are available.
The formulation of ELBO might not be very valid (refer to the detailed comments)
The empirical evaluation domains are too simple and some of the experimental configurations are not properly specified.
The method is developed upon a very simple RL algorithm.
Detailed comments
:
The assumption that there are always oracle observations available during policy learning does not hold for many applications, such as Atari 2600 and noisy robotics navigation tasks. And in the more challenging problems with partial observability, it is unclear whether this method could be directly applied and how to effectively regularize the posterior distribution, which is meant to represent partially observed states.
I feel the formulation of ELBO is not that sound. I do not know why the latent of the oracle observations is set as the posterior instead of that from the executor observation. Also, it seems that there are two execution paths for the ELBO, i.e., there should be one for the oracle and another for the executor observation. But in Sec 4.2, this is not properly specified, as Eq (2) only tackles one of the paths.
It is unclear how the variational model is related to the policy model, e.g., is the input to the policy x_t or z_t?
The authors employ a very simple RL algorithm to deal with very simple tasks. For instance, in the first task of maze navigation, I'm very sure that in such a simple case something like PPO will successfully solve the problem. I don't understand why the problem could be unsolvable, i.e., why the performance of the baseline is so poor. I'm also concerned whether the experimental settings are properly configured. For instance, in the maze task, the input is very shallow, i.e., with only ~3 dimensions, but the authors specify the latent to have a dimension of 128. I believe such a setting is not very reasonable.
In many cases, even though the oracle features might be available, they might have a very different format from the raw observations, e.g., (x, y, z) vs. high-dimensional images. In such scenarios, would the two-encoder-one-decoder architecture still work?
Overall the novelty of this paper is relatively limited and it lacks discussion of a number of related works that combine variational inference with policy learning, such as: [1] Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model [2] Variational methods for Reinforcement Learning [3] Sequential Generative Exploration Model for Partially Observable Reinforcement Learning [4] Reinforced Variational Inference
ICLR | Title
Variational oracle guiding for reinforcement learning
Abstract
How to make intelligent decisions is a central problem in machine learning and artificial intelligence. Despite recent successes of deep reinforcement learning (RL) in various decision making problems, an important but under-explored aspect is how to leverage oracle observation (the information that is invisible during online decision making, but is available during offline training) to facilitate learning. For example, human experts will look at the replay after a Poker game, in which they can check the opponents' hands to improve their estimation of the opponents' hands from the visible information during playing. In this work, we study such problems based on Bayesian theory and derive an objective to leverage oracle observation in RL using variational methods. Our key contribution is to propose a general learning framework referred to as variational latent oracle guiding (VLOG) for DRL. VLOG features preferable properties such as robust and promising performance and the versatility to be incorporated with any value-based DRL algorithm. We empirically demonstrate the effectiveness of VLOG in online and offline RL domains with tasks ranging from video games to a challenging tile-based game, Mahjong. Furthermore, we publish the Mahjong environment and an offline RL dataset as a benchmark to facilitate future research on oracle guiding1.
1 INTRODUCTION
Deep reinforcement learning (DRL) has undergone rapid development in recent years (Sutton & Barto, 2018; Mnih et al., 2015; Vinyals et al., 2019). However, there is a common and important but under-explored aspect in RL: imagine that after playing a Poker game, a human player may look at the replay to check opponents' hands and analyze this information to improve his/her playing strategy (or policy) for the next time. We refer to information like the opponents' hands as oracle observation, defined as the information invisible to the agent during online task execution but available during offline training. By contrast, the information available during task execution is called executor observation. Such a scenario has been referred to as oracle guiding for RL (Li et al., 2020; Fang et al., 2021) (see Sec. 3 for a formal definition). Oracle guiding is common in real life: for example, when taking an examination (the oracle observation is the answers to similar questions, which are available only during preparation); or training a robot to perform some tasks on the Moon (when training the robot, we can provide it with information about the terrain, which is not available during execution). The type of oracle observation can be diverse, including hindsight information (Harutyunyan et al., 2019; Guez et al., 2020), human feedback (Knox & Stone, 2009; Loftin et al., 2016; MacGlashan et al., 2017), re-calibrated data with post-processing, and hidden states in a partially observed setting (Li et al., 2020).
While humans naturally perform oracle guiding when learning to make decisions, it remains challenging in RL. The difficulties include: (1) how to guarantee that learning with oracle observation
∗ Work done during an internship at Microsoft Research Asia. Email: dongqi.han@oist.jp
1 https://github.com/Agony5757/mahjong
improves the main decision model using executor observation only, and (2) if introducing an auxiliary loss leveraging oracle observation, how to tradeoff between the main loss and auxiliary loss. While recent studies attempted to model oracle guiding in RL (Guez et al., 2020; Li et al., 2020; Fang et al., 2021), none of them addressed these difficulties (refer to the Related Work section for more details). In particular, all these proposed methods are heuristic: although empirical results showed performance gain with oracle guiding, it is not theoretically guaranteed that the usage of oracle observation improves execution performance.
In this paper, we propose a fundamentally new idea for oracle guiding based on Bayesian theory. Taking Poker as an example, we know that the learning of the optimal strategy is tractable if we know the global, true state of the environment (or simply state2), including all visible or invisible cards, the opponents' playing styles, etc. (Azar et al., 2017; Jin et al., 2018; 2020). A key part of skill improvement is learning to estimate the probabilistic distribution of the environmental state from the executor observation. The common way to do this by human experts is to watch match replays where the oracle observation (e.g. opponents' hands) is available, and then use the oracle-estimated state to correct the executor-estimated state. We interpret this in Bayesian language: the executor-estimated state as the prior distribution, and the oracle-estimated one as the posterior distribution. Thus, the training objective can be considered two-fold: learning to make decisions based on the posterior estimation of state, and learning a prior distribution of state closer to the posterior one.
We formulate this idea by proposing a novel learning framework for general oracle guiding problems based on variational Bayes (VB) (Kingma & Welling, 2014), referred to as variational latent oracle guiding (VLOG). VLOG has several preferable properties. First, VLOG is theoretically guaranteed to leverage oracle observation for improving the decision model that uses executor observation. Second, VLOG is a versatile DRL framework that can be integrated into any value-based RL algorithm and is agnostic to the type of oracle observation. Third, VLOG does not suffer from the necessity of tuning additional hyper-parameters. Finally, we empirically show that VLOG contributes to better performance in a variety of decision-making tasks in both online and offline RL domains. The tasks include a simple maze navigation, video game playing, and a particularly challenging tile-based game, Mahjong, in which humans heavily leverage oracle observation in learning (Li et al., 2020). We also contribute to the community by taking Mahjong as a benchmarking task for oracle guiding and publishing the RL environment and dataset to facilitate future research.
2 RELATED WORK
In the past few years, research interest has grown in DRL and imitation learning (Chen et al., 2020) with leveraging oracle or hindsight information. For DRL, Guez et al. (2020); Fang et al. (2021) considered hindsight observation (executor observation at future steps) as the oracle observation during training. Guez et al. (2020) used hindsight observation to facilitate learning a representation of the current state. Another method (Fang et al., 2021) was used for stock trading: the authors trained a teacher (oracle) policy with hindsight information, and employed network distillation to make the student policy behave more similarly to the teacher policy. Both methods (Guez et al., 2020; Fang et al., 2021) are heuristic and focused on making use of future observation for better sequential modeling, while VLOG is theoretically guaranteed for any kind of oracle observation. For applications in imperfect-information games, Suphx (Li et al., 2020), a DRL-based AI for Mahjong, also introduced a method to leverage oracle observation (opponents' hands) for stronger performance. They concatenate the oracle observation with the executor observation as the input of the policy network, where the oracle observation is multiplied by a scalar variable which is annealed from 1 to 0 during the training course. However, the method used in Li et al. (2020) is also heuristic and has only been tested in one task.
Variational Bayes (VB) is a well-established method and has been taken advantage of in RL. For example, control as probabilistic inference uses VB to connect the objective function of RL and the variational lower bound of a probabilistic inference problem (Furmston & Barber, 2010; Weber et al., 2015; Levine, 2018). Our idea differs since it frames value regression as a maximum-likelihood problem, and then applies VB to solve it (see Sec. 4). Also, the usage of variational Bayesian network models for DRL has captured researchers' attention recently. For example, Ha & Schmidhuber (2018) proposed to employ a VAE to reduce the high dimensionality of image observation; Igl et al.
2 Generally, the oracle observation does not necessarily contain all the information of the environmental state.
(2018); Han et al. (2020); Lee et al. (2020) proposed variational RNNs as the state-transition models to encode the belief states of the agent; Yin et al. (2021) utilized a variational sequential generative model for predicting future observation and used the prediction error to infer intrinsic reward to encourage exploration; and Okada et al. (2020) demonstrated performance gain by using a deep Bayesian planning model in continuous control tasks. Our study differs from the mentioned works by focusing on oracle guiding, and VLOG does not involve learning a state transition model.
3 ORACLE GUIDING POMDP
Here we define the problem based on the Partially Observable Markov Decision Process (POMDP) (Sondik, 1978). An oracle guiding POMDP is distinguished from the original POMDP by having two types of observations: executor and oracle. The executor observation x is always available to the agent, and the agent's decision making (execution) relies on it. The oracle observation x̂ is not accessible during execution, but can be obtained afterward. x is included in x̂ since the former is always available. Thus x̂ contains no less information about the underlying environment state than x.
A formal definition of an oracle guiding POMDP is a tuple ⟨S, A, P_0, T, X, X̂, O, Ô, γ⟩, where S and A are the state and action spaces, respectively. P_0 specifies the initial state distribution such that P_0(s) is the probability of a state s ∈ S being an initial state. T specifies the state transition probability such that T(s′, r | s, a) is the probability of reaching a new state s′ ∈ S with an immediate reward r ∈ R after taking an action a ∈ A at a state s ∈ S. X denotes the executor observation space and X̂ denotes the oracle observation space. O specifies the executor observation probability such that O(x | s) is the probability of an executor observation x ∈ X at a state s ∈ S. Similarly, Ô specifies the oracle observation probability such that Ô(x̂ | s) is the probability of an oracle observation x̂ ∈ X̂ at a state s ∈ S. γ ∈ [0, 1) is the discount factor. Value functions are the expected value of the return from a particular state or state-action pair. For a policy π, its Q-function is defined by q^π(s, a) := E_π[∑_{n=0}^∞ γ^n r_{t+n} | s_t = s, a_t = a], where E_π indicates the expectation when the policy π is followed. The state value function v^π is defined by v^π(s) := E_{π(a|s)}[q^π(s, a)]. The agent aims to maximize E_{P_0(s)}[v^π(s)] with respect to π, and the value functions play an important role to this end (Sutton & Barto, 2018).
4 VLOG: VARIATIONAL LATENT ORACLE GUIDING
Let us introduce a latent vector z_t, which is a probabilistic variable representing the environmental state s_t. From a Bayesian perspective, we consider the prior distribution p(z_t | x_t) as the agent's estimated probability density function (PDF) for z_t based on the executor observation x_t. Meanwhile, the posterior PDF q(z_t | x̂_t) is modeled based on the oracle observation x̂_t. In RL, the most basic requirement is to make a good estimation of the return by a value function approximator v(x_t) := ∫ v(z_t) p(z_t | x_t) dz_t (we denote it by v, but it can also be a Q-function by simply replacing x_t with (x_t, a_t)) based on available information, i.e., the executor observation x_t. The target of the return, denoted by v^tar_t, can be estimated by any value learning algorithm such as TD(0) (Sutton & Barto, 2018), Peng's Q(λ) (Kozuno et al., 2021), etc. (generally, v^tar can always be given by Bellman equations; however, one can also use the Monte-Carlo return as v^tar if available). In particular, we employed double Q-learning with dueling architecture (Wang et al., 2016) to compute v^tar due to its effectiveness and simplicity (Sec. 5 and B.1). We want to maximize the log-likelihood objective of the estimation of return based on the executor observation x_t (i.e., for the executor model)
L := log P(v(x_t) = v^tar_t | x_t) = log ∫_{z_t} P(v(z_t) = v^tar_t | z_t) p(z_t | x_t) dz_t

= log ∫_{z_t} [q(z_t | x̂_t) / q(z_t | x̂_t)] P(v(z_t) = v^tar_t | z_t) p(z_t | x_t) dz_t.

By Jensen's inequality, we have

L ≥ ∫_{z_t} q(z_t | x̂_t) log [ P(v(z_t) = v^tar_t | z_t) p(z_t | x_t) / q(z_t | x̂_t) ] dz_t

= ∫_{z_t} [ q(z_t | x̂_t) log P(v(z_t) = v^tar_t | z_t) − q(z_t | x̂_t) log (q(z_t | x̂_t) / p(z_t | x_t)) ] dz_t

= E_{q(z_t | x̂_t)}[log P(v(z_t) = v^tar_t | z_t)]  [oracle prediction error]  −  D_KL(q(z_t | x̂_t) ‖ p(z_t | x_t))  [regularization term]  :=  L_VLOG.   (1)
Thus we can maximize our initial objective L via maximizing L_VLOG, which is also known as the variational lower bound (Kingma & Welling, 2014), but in our oracle-guiding scheme. Since p(z_t | x_t) and q(z_t | x̂_t) represent the PDFs of the latent vector obtained from the executor observation and the oracle observation, respectively, the meanings of the two terms in L_VLOG appear clear now: the first term, i.e., the oracle prediction error, helps to improve value estimation from the posterior latent state distribution (z_t computed with the oracle observation); and the second term, i.e., the regularization term, helps to shape the prior representation of z_t closer to the posterior one as latent oracle guiding. We would like to highlight that the VLOG objective is a lower bound of the objective for the prior executor model v(x_t) (the estimation of return using the executor observation x_t) with the usage of the oracle observation x̂_t. This lower bound guarantees that the usage of oracle observation facilitates the learning of the executor model, which is our original motivation. Remark 1. One may use any shape of the approximate posterior q, depending on which different instances of VLOG are possible. Furthermore, one may directly use v(x_t) instead of p(z_t | x_t). These design choices allow users to incorporate any prior knowledge on oracle observation. For example, if one knows the range of a state-value at x_t is a closed interval [l, u], the approximate posterior q(v_t | x̂_t) can be restricted to a family of probability distributions supported on [l, u].
4.1 IMPLEMENTATION WITH NEURAL NETWORKS
Inspired by the implementation of the variational auto-encoder (VAE, Kingma & Welling (2014)), we propose the neural network architecture of VLOG (Fig. 1). The executor observation x_t and the oracle observation x̂_t are processed by two distinct encoder networks to compute the prior and posterior distributions of the latent vector, respectively. During training, both x_t and x̂_t are available, and all the network parameters are updated by maximizing the VLOG objective in an end-to-end manner (Fig. 1A). During execution, the agent computes the prior distribution p(z | x_t) for decision making (Fig. 1B) without using the oracle observation. z_t is drawn from a parameterized normal distribution:
p(z_t | x_t) = N(μ^p_t, exp(log σ^p_t)), (μ^p_t, log σ^p_t) = prior_encoder(x_t),
q(z_t | x̂_t) = N(μ^q_t, exp(log σ^q_t)), (μ^q_t, log σ^q_t) = posterior_encoder(x̂_t).
For computing P(v(z_t) = v^tar_t | z_t) in Eq. 1, we simply assume it follows a normal distribution, and estimate it with the mean square error between v(z_t) and v^tar_t in practice. The reparameterization trick is used to perform end-to-end training as in a VAE (Kingma & Welling, 2014). Then the output of the decoder (value function) can be obtained by v(z_t) = decoder(z_t). Note that z_t is obtained using the posterior encoder during training, and using the prior encoder during execution (Fig. 1A, B).
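A minimal PyTorch sketch of this sampling scheme is given below; the encoder interfaces and the choice to sample (rather than use the mean) at execution time are our assumptions, and exp(log σ) is treated as the standard deviation.

import torch

def sample_latent(x, x_hat, prior_encoder, posterior_encoder, training=True):
    # Prior from the executor observation x; posterior from the oracle observation x_hat.
    mu_p, log_sigma_p = prior_encoder(x)
    if training:
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        mu_q, log_sigma_q = posterior_encoder(x_hat)
        z = mu_q + torch.exp(log_sigma_q) * torch.randn_like(mu_q)
    else:
        # Execution: only the executor observation is available.
        z = mu_p + torch.exp(log_sigma_p) * torch.randn_like(mu_p)
    return z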
4.2 TASK-AGNOSTIC LATENT BOTTLENECK CONTROL
To learn a better representation, we borrow the idea from β-VAE (Higgins et al., 2016) and multiply the regularization term by a coefficient β. Thus we have the loss function (the negative lower bound)
J^β_VLOG = −E_{q(z_t | x̂_t)}[log P(v(z_t) = v^tar_t | z_t)] + β D_KL(q(z_t | x̂_t) ‖ p(z_t | x_t)).   (2)
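As an illustration, the following sketch evaluates Eq. (2) for diagonal Gaussian prior and posterior, implementing the log-likelihood term as a mean square error (as described above) and the KL divergence in closed form; treating exp(log σ) as the standard deviation is our assumption.

import torch

def vlog_loss(v_pred, v_target, mu_p, log_sigma_p, mu_q, log_sigma_q, beta):
    # Oracle prediction error: -log P(v(z_t) = v_tar | z_t) as an MSE (up to constants),
    # where v_pred is computed from z_t sampled from the posterior q(z_t | x_hat_t).
    prediction_loss = (v_pred - v_target).pow(2).mean()
    # Closed-form KL( q(z | x_hat) || p(z | x) ) for diagonal Gaussians.
    var_p = (2 * log_sigma_p).exp()
    var_q = (2 * log_sigma_q).exp()
    kl = 0.5 * (var_q / var_p + (mu_q - mu_p).pow(2) / var_p - 1.0
                + 2 * (log_sigma_p - log_sigma_q)).sum(dim=-1).mean()
    return prediction_loss + beta * kl, kl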
The hyper-parameter β controls the capacity of the latent information bottleneck (Tishby & Zaslavsky, 2015; Alemi et al., 2017). We found the choice of β to be important for the performance of VLOG in RL (see Appendix B.3). However, having extra hyper-parameters is not desired. Inspired by the method used in Burgess et al. (2017) for controlling the scale of the KL divergence in β-VAE, we propose a task-agnostic method to automatically adjust β by setting a target of the KL divergence, D^tar_KL. In particular, we minimize the auxiliary loss function (with β as the optimized parameter)

J_β = (log_10 D^tar_KL − log_10 D_KL(q(z_t | x̂_t) ‖ p(z_t | x_t))) log(β).   (3)
The intuition here is to strengthen the regularization by increasing β when the divergence between the prior and the posterior is too large, and vice versa. This method is similar to that used in soft actor-critic for automatically adjusting the entropy coefficient (Haarnoja et al., 2019), while we use it for the KL divergence coefficient. Importantly, we found a well-performing value D^tar_KL = 50 agnostic to other design choices. It worked well in a range of different tasks and networks (Sec. 5). Therefore, we do not need to tune β. We provide more discussion about this method in Appendix D.
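A minimal sketch of this adjustment, parameterizing β through its logarithm and taking one gradient step on Eq. (3) per update; the optimizer and learning rate are assumptions for illustration.

import torch

log_beta = torch.zeros(1, requires_grad=True)           # beta = exp(log_beta), initially 1
beta_optimizer = torch.optim.Adam([log_beta], lr=1e-4)  # assumed optimizer settings

def update_beta(kl_divergence, target_kl=50.0):
    # J_beta = (log10(D_tar) - log10(D_KL)) * log(beta): minimizing it increases beta
    # when D_KL > D_tar and decreases beta when D_KL < D_tar.
    coeff = torch.log10(torch.tensor(target_kl)) - torch.log10(kl_divergence.detach())
    loss = coeff * log_beta
    beta_optimizer.zero_grad()
    loss.backward()
    beta_optimizer.step()
    return log_beta.exp().item()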
5 EXPERIMENTS
How does VLOG perform in practice? We investigated the empirical performance of VLOG in three types of tasks using online or offline RL, from simple to difficult. In the following experiments, we used double DQN with dueling network architecture (van Hasselt et al., 2016; Wang et al., 2016) as the base RL algorithm; the model and loss functions for RL are defined in Appendix B.1. As DRL is susceptible to the choice of hyper-parameters, introducing any new hyper-parameters might obscure the effect of oracle guiding. Double DQN and the dueling architecture are preferable as the base algorithm since they require no additional hyper-parameters, in contrast to other DQN variants (Hessel et al., 2018), such as prioritized experience replay (Schaul et al., 2016), noisy networks (Fortunato et al., 2018), categorical DQN (Bellemare et al., 2017), and distributed RL (Kapturowski et al., 2018). Importantly, we used the same hyper-parameter setting for all methods and environments as much as possible (see Appendix B.2).
5.1 MAZE
We first demonstrate how VLOG helps to shape the latent representation by leveraging oracle observation in learning. The testbed is a maze navigation task3 (Fig. 2A) with 10×10 grids. The executor observation is the (x, y) position, where x, y are continuous values randomly sampled within each grid of the maze (thus the observations in two adjacent but wall-separated grids may be very close). At each step, the agent selects an action (going up, down, right, or left) and moves to another grid if not blocked by a wall. We provided the VLOG agent with the oracle observation (x_c, y_c, d_g) during training, where x_c, y_c are the coordinates of the center of the current grid, and d_g is the (shortest) path distance to the goal from the current grid. It is intuitive that although the raw observation is (x, y), d_g matters more in a maze navigation task. We empirically investigated how much such oracle observation used in the learning of VLOG could help to shape the latent representation of z with respect to d_g rather than position. The encoder and decoder were both 2-layer multi-layer perceptrons (MLPs) with width 256 and ReLU activation. The size of the latent vector z_t for VLOG was 128 since we computed both μ and σ (Appendix C).
Experiments show that the baseline agent struggled to reach the goal (Fig. 2B), while VLOG agents stably solved the task after learning. To check how the usage of VLOG affected the learned latent representation, we visualize the latent representations of both the VLOG and baseline models with principal component analysis (PCA; Pearson, 1901). In Fig. 2C, we map the path distance to goal d_g to color and plot the scores of the first 2 PCs of z (computed using executor observation)
3 https://github.com/MattChanTK/gym-maze
for VLOG and the corresponding latent state for the baseline using successful trials ("latent layer" in Fig. 1C). The latent state of VLOG showed a relatively smoother and more interpretable representation of distance-to-goal compared to that of the baseline. We then plot the latent representations for different positions in the maze in Fig. 2D. The latent state of VLOG more clearly represented d_g, consistent with the result in Fig. 2C. In particular, we looked into a rectangular region (denoted by rectangles in Fig. 2D) inside which the left 2 grids and right 2 grids are segregated by a wall. We found the corresponding areas in the latent PC space and circled them in Fig. 2C. While these 4 grids are close in (x, y) (executor observation), their distances-to-goal (oracle observation) are highly distinguished. By leveraging oracle guiding using VLOG, the agents can clearly differentiate the left 2 grids and the right 2 grids in latent space, as shown in Fig. 2C, D, left (note that the latent state z here of VLOG was computed using the executor observation only). By contrast, the latent representations of these grids overlapped for the baseline model, which did not utilize the oracle observation (Fig. 2C, D, right). In sum, we demonstrated with a toy example that VLOG effectively helped the latent space to couple with the oracle state useful for the task. The following sections turn to experiments on more complicated tasks and discuss how VLOG can improve practical performance.
5.2 NOISY MINATAR
To evaluate how VLOG scales to more high-dimensional state spaces, we tested it on a set of MinAtar video games. MinAtar (Young & Tian, 2019) is a test platform for AI agents, which implements 5 miniaturized Atari 2600 games with discrete actions (Seaquest, Breakout, Space Invaders, Freeway and Asterix). MinAtar is inspired by the Arcade Learning Environment (Bellemare et al., 2013) but simplifies the environments for efficiency. The observation is 10×10 pixels with multiple channels indicating different objects. In the real world, the observation usually contains some noise. Thus it is natural to consider the noisy observation as the partially-observable executor observation, and the original, non-noisy observation as the oracle one. Suppose that at each frame, each pixel may "break" randomly with an independent probability of 1/8 (Fig. 3A). The original observation at a broken pixel is erased and replaced by a different value in all channels. We consider such noisy MinAtar environments with the noisy pixels as the executor observation and the original pixels as the oracle observation.
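One possible implementation of this corruption is sketched below; the fill value written to broken pixels is a hypothetical choice, since the text only states that it differs from the original values.

import numpy as np

def add_breakage_noise(obs, p_break=1.0 / 8, rng=np.random):
    # obs: (10, 10, n_channels) MinAtar frame; each pixel breaks independently.
    broken = rng.random(obs.shape[:2]) < p_break
    noisy = obs.copy()
    noisy[broken] = -1.0  # hypothetical fill value applied across all channels
    return noisy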
The network structure was the same as that for Maze, but the encoder was replaced by a CNN (Appendix C). We ran experiments on all 5 MinAtar environments with VLOG as well as
baseline, oracle and alternative oracle guiding methods (see Appendix A for details). The baseline model always uses the executor observation as network input (Fig. 1C). Oracle is the same as baseline except that it always receives the oracle observation (i.e., cheating; we ran the oracle experiments for reference). VLOG-no oracle is an ablation of VLOG where we use the executor observation as the input to the posterior encoder in VLOG (the oracle observation is not used). Suphx-style oracle guiding is the oracle guiding method used in the Mahjong AI Suphx (Li et al., 2020), in which the executor observation and the dropout-ed oracle observation (with dropout probability p_dropout) are concatenated as the input to the network. As training proceeds, p_dropout is gradually increased from 0 to 1, so the trained network does not need the oracle observation as input (Appendix A). OPD-style oracle guiding is the oracle guiding method used in oracle policy distillation (OPD) (Fang et al., 2021). OPD-style oracle guiding first trains a teacher model using the oracle observation as input, and then trains the executor model with an auxiliary loss, which is the error between the executor and teacher models' estimates of the value function (Appendix A).
The results show that oracle usually performed the best, as expected. We normalized the performance of the non-oracle models using the oracle model as reference, for a clearer comparison (Fig. 3B). Among all the oracle guiding methods (VLOG, OPD-style and Suphx-style), VLOG consistently performed the best. It is notable that VLOG and VLOG-no oracle performed surprisingly well in Seaquest. This can be explained by the fact that Seaquest is a task with a local optimum (see Appendix E), while the stochasticity in the hidden states of VLOG helped exploration in latent space to escape from the local optimum (a similar idea is Fortunato et al. (2018), but their noise was added to the network weights). Except in Seaquest, VLOG-no oracle did not show a significant performance difference from baseline, showing that the performance gain of VLOG in this task set mainly came from leveraging the oracle observation for shaping the latent distribution; and the usage of the variational Bayesian model was at least not harming performance when there was no helpful oracle information.
5.3 OFFLINE LEARNING ON MAHJONG
Mahjong is a popular tile-based game with hundreds of millions of players worldwide (here we consider the Japanese variant). The game is like many other card games (but using tiles instead of cards), in which multiple (usually four) players draw and discard tiles (136 tiles in total) alternately to satisfy winning conditions. It is a highly challenging game characterized by (1) imperfect information in the executor observation (a player cannot see opponents' private tiles and the remaining tiles to be drawn), (2) stochastic state transitions as in many card games, and (3) extremely high game complexity (i.e., the number of distinguishable, legal game states). The complexity of Mahjong is much larger than 10^166 (Appendix F.1). For reference, the complexity of Go is ~10^172 (Silver et al., 2016) and the complexity of no-limit Poker is ~10^162 (Johanson, 2013). In Mahjong, it is hard to make optimal decisions based on the executor observation because the outcomes heavily depend on invisible information, and the complexity of the invisible state space is as high as 10^48 on average (Li et al., 2020). In response to this challenge, Li et al. (2020) introduced Suphx-style oracle guiding and demonstrated a performance gain. Thus, we consider Mahjong a promising test platform for oracle guiding methods. Since the number of possible states in Mahjong is extremely large, it is costly to explore with random actions in an online RL manner, and no previous work could train a strong Mahjong AI with purely online RL. Also, we would like to examine the effectiveness of VLOG in offline RL settings. For these reasons, we turned to offline RL (Levine et al., 2020) for the Mahjong task using expert demonstrations.
We processed about 23M steps of human experts' plays from the online Mahjong game platform Tenhou (https://tenhou.net/mjlog.html) into a dataset for offline RL (the data were augmented using the symmetry in Mahjong, see Appendix F). Also, we created a simulator of Mahjong as the testing environment. Though there are sophisticated ways to encode the state and action space of Mahjong (Li et al., 2020), we attempt to make simplifications with a reasonable amount of approximation, since our goal is not to create a strong Mahjong AI, but to use Mahjong as a platform to study oracle guiding problems. In our case, the action space is composed of 47 discrete actions covering all decisions in Mahjong. An executor observation is a matrix encoding public information and the current player's private hand; an oracle observation concatenates the executor observation with the information of the opponents' private hands (see Appendix F). We used a 1-D CNN as the encoder, as commonly done in Mahjong AIs (Li et al., 2020), and the size of z_t and the decoder network width were increased to 512 and 1024, respectively (Appendix C).
Note that although Mahjong is a 4-player game, using offline RL data to train an agent does not involve multi-agent RL (Zhang et al., 2021), because the offline dataset is fixed: the opponents have fixed policies and thus can be considered parts of the environment. Our experiments focused on single-agent RL to avoid the complexity of multi-agent RL. We investigated two kinds of offline RL settings. The first is conservative Q-learning (CQL) (Kumar et al., 2020). Our CQL setting differed from the online RL setting in previous sections by adding an auxiliary CQL loss (Kumar et al., 2020) to the Q-learning loss function (Appendix B.1.2). The other is behavior cloning (BC). Although VLOG was designed for value-based RL, we could straightforwardly incorporate VLOG with BC by letting the network predict the action instead of the Q-function. The learning was conducted by minimizing the cross entropy between the output and the target action (demonstration), as in a classification problem. Note that we did not test OPD-style oracle guiding in the BC setting because it would be equal to the baseline, since we can directly use demonstration actions as the oracle policy for distillation.
Because Mahjong is a zero-sum, four-player game, we tested the performance of the trained models in two scenarios: playing with the (trained) baseline model (Table 1) and playing against each other (Table 2). In the first scenario, four agents played the same matches at the game table, where two of them were the agents being tested and the other two were baseline models. Although each agent played for itself and there was no communication between players, for statistics we simply added up the payoffs of the two tested agents, and considered them to have won a match if one of them ranked top (so the match win rate will be 50% if they are equally strong as the baseline) (Table 1).
For CQL, the results (Table 1 left and Table 2 upper) show that VLOG substantially outperformed the baseline and the alternative methods (because Mahjong is a highly random game, a 55.7% match win rate indicates a large skill gap). Interestingly, VLOG was even comparable to oracle. This can be explained by the fact that VLOG also benefited from its Bayesian property, which is consistent with the significant performance gain of VLOG-no oracle over the baseline model (Table 1 left). Still, the oracle model learned to reduce deal-ins (i.e., a player discards a tile and another player wins the game by picking up this tile to compose a winning hand) since it could explicitly see the opponents' private tiles, showing a much lower deal-in rate than the other non-cheating models (Table 2 upper).
In the BC setting, the agents did not learn a value function, but tried to predict human experts' actions. Therefore, the training procedure did not involve reasoning about the relationship between the playing outcome and the oracle observation, but just imitating human behaviors. This can be seen from the result that oracle did not substantially outperform baseline in BC (Table 1 right and Table 2 lower). However, VLOG and VLOG-no oracle still showed a performance gain, thanks to the stochastic modeling.
6 SUMMARY
We have proposed VLOG – a variational Bayesian learning framework for leveraging oracle observation to facilitate DRL, especially in partially observable environments. VLOG is applicable to any RL problem in which there is an oracle observation that may help the executor to make decisions.
We first introduced a latent vector z to represent the environmental state. The prior and posterior distributions of z are modeled using the executor and oracle observations, respectively. Then, we derived a variational lower bound (Eq. 2), by maximizing which we can optimize the executor model using the oracle observation. We developed the corresponding methodology for DRL, which can be incorporated with most RL algorithms that estimate a value function.
If the oracle observation contains more information for retrieving the true environmental state (or is the true environmental state), VLOG's oracle guiding in latent space helps to shape a latent representation in neural networks closer to the true one. We demonstrated this advantage of VLOG using the maze task. Then, we scaled VLOG up to solve image-based video games, and compared it with alternative oracle-guiding methods. Though all oracle-guiding methods showed performance gains over the baseline model, VLOG consistently performed the best. Finally, we moved to the offline RL domain using the challenging tile-based game Mahjong, in which an executor plays with hidden information and random state transitions, and observed that VLOG achieved the best overall performance.
We also conducted an ablation study of VLOG (VLOG-no oracle) in which the posterior model did not receive the oracle observation, but the executor one. VLOG-no oracle demonstrated performance gains in the tasks that may benefit from stochasticity; otherwise, it performed similarly to the deterministic baseline. This clarified that the source of VLOG's promising performance is two-fold: oracle guiding and stochastic modeling. Finally, we publish the dataset of Mahjong for offline RL and the corresponding RL environment so as to facilitate future research on oracle guiding.
ACKNOWLEDGEMENT
This work was supported by Microsoft Research Asia. Kenji Doya was supported by Japan Society for the Promotion of Science KAKENHI Grant Numbers JP16K21738, JP16H06561 and JP16H06563, as well as by Okinawa Institute of Science and Technology.
REPRODUCIBILITY STATEMENT
The source code of VLOG can be found in Supplementary Material.
ETHICS STATEMENT
We declare no conflict of interest. We tried to use colors friendly to people with color vision deficiencies (Fig. 2C, D) and distinguishable markers for the performance curves of different models (Fig. 2B, Fig. 3 and Fig. 6). Our Mahjong dataset was generated using downloadable, public game replay data from Tenhou.net with post-processing. The dataset contains no private information about players. Since VLOG is a general framework for leveraging oracle information, we cannot foresee any direct application of VLOG to malicious purposes. However, any new RL algorithm might confer increased autonomy on an agent, and eventually lead to a completely autonomous agent, which could be used for malicious purposes, e.g., fully autonomous soldiers.
B RL ALGORITHMS AND HYPER-PARAMETERS
B.1 RL ALGORITHMS
B.1.1 DUELING DOUBLE DQN FOR MAZE AND MINATAR TASKS
As we discussed in Sec. 5, we used double DQN with dueling network architecture (van Hasselt et al., 2016; Wang et al., 2016) as the base RL algorithm, because it works relatively well (Hessel et al., 2018) without introducing additional hyper-parameters.
The dueling architecture of DQN (Wang et al., 2016) is defined as follows (see Appendix C for the hidden layer size):
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    def __init__(self, input_size, action_num, hidden_layers):
        super(DuelingQNetwork, self).__init__()
        self.input_size = input_size
        self.action_num = action_num
        self.hidden_layers = hidden_layers
        self.network_modules = nn.ModuleList()
        last_layer_size = input_size
        for layer_size in hidden_layers:
            self.network_modules.append(nn.Linear(last_layer_size, layer_size))
            self.network_modules.append(nn.ReLU())
            last_layer_size = layer_size
        self.value_layer = nn.Linear(last_layer_size, 1)               # state value V
        self.advantage_layer = nn.Linear(last_layer_size, action_num)  # advantages A
        self.main_network = nn.Sequential(*self.network_modules)

    def forward(self, x):
        h = self.main_network(x)
        v = self.value_layer(h).repeat_interleave(self.action_num, dim=-1)
        q0 = self.advantage_layer(h)
        # Subtract the mean advantage for identifiability: Q = V + A - mean(A).
        a = q0 - torch.mean(q0, dim=-1, keepdim=True).repeat_interleave(
            self.action_num, dim=-1)
        q = v + a
        return q
Double deep Q-learning (van Hasselt et al., 2016) was used to compute Q^target in Fig. 1A (one can use any other algorithm to compute Q^target without changing other parts). In particular, as in Wang et al. (2016), we have
Q^target_t = r_t + γ Q(z_{t+1}, argmax_{a′} Q(z_{t+1}, a′; θ); θ⁻),
where r_t is the reward at step t, γ is the discount factor (Table 3), and θ denotes the parameters of the Q network (MLP decoder) for computing the Q-function (Appendix C). Note that z is given by the posterior encoder with the oracle observation x̂ as input, since the oracle prediction error term E_{q(z_t | x̂_t)}[log P(v(z_t) = v^tar_t | z_t)] in Eq. 1 is the expectation over the posterior distribution q(z | x̂). Following deep RL conventions (Mnih et al., 2015; Wang et al., 2016; van Hasselt et al., 2016), we used a target Q network with the same structure as the original Q network, whose parameters are denoted θ⁻ (Table 3). Every 1,000 steps, the target Q network copies the parameters from the original Q network (Table 3). Then the first term of the VLOG loss function (Eq. 2) is simply given by the mean square error between Q^target and the output of the Q network (MLP decoder) for the Maze and MinAtar tasks.
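In code, the target computation might look like the following sketch, where q_net and target_q_net map a batch of latents to per-action Q-values; the terminal-state masking is an assumption not spelled out in the text.

import torch

@torch.no_grad()
def double_dqn_target(reward, z_next, q_net, target_q_net, gamma=0.99, done=None):
    # Select the next action with the online network, evaluate it with the target network.
    best_action = q_net(z_next).argmax(dim=-1, keepdim=True)
    next_q = target_q_net(z_next).gather(-1, best_action).squeeze(-1)
    if done is not None:
        next_q = next_q * (1.0 - done)  # assumed masking of terminal transitions
    return reward + gamma * next_q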
B.1.2 DUELING DOUBLE DQN WITH CONSERVATIVE Q LEARNING FOR MAHJONG
In Mahjong, as we move to the offline RL domain (Sec. 5.3), directly using an off-policy RL algorithm usually results in very unsatisfying performance (Levine et al., 2020).
Therefore, we complement the loss function of VLOG (Eq. 2) with an auxiliary conservative Q-learning (CQL) loss (Kumar et al., 2020),

J_CQL = α E_{(x̂, a) ∼ D}[ log ∑_{a′ ∈ A} exp(Q(z(x̂), a′; θ)) − Q(z(x̂), a; θ) ],

where D is the offline dataset we used for Mahjong and α = 1. The combined loss function used for Mahjong (CQL) is J^β_VLOG + J_CQL.
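A minimal sketch of this auxiliary term, assuming q_values holds Q(z(x̂), a′; θ) for every action and actions holds the dataset actions a ∼ D:

import torch

def cql_loss(q_values, actions, alpha=1.0):
    # log sum_{a'} exp Q(z(x_hat), a'; theta) pushes all Q-values down,
    # while Q for the dataset action is pushed back up.
    logsumexp_q = torch.logsumexp(q_values, dim=-1)
    data_q = q_values.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    return alpha * (logsumexp_q - data_q).mean()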
B.2 HYPER-PARAMETER SELECTION
We summarize the hyper-parameters in Table 3.
B.3 SENSITIVITY ANALYSIS FOR β
As we mentioned in Sec. 4.2, the coefficient β is important to the learning of VLOG agents. If we fix the value of β throughout training, a β that is too large or too small will result in worse performance. The corresponding results are shown in Fig. 6.
C NETWORK STRUCTURES
For simplicity, we use the same hidden layer size for all fully connected layers: 256 for Maze and MinAtar, and 1024 for Mahjong.
C.1 ENCODER
Since we target various tasks, we used a different encoder network for each type of environment. The prior and posterior encoders have the same structure except for different sizes of input features/channels.
For the Maze task, the encoder was a 2-layer MLP with ReLU activation. The output size is also equal to the hidden layer size.
For the MinAtar tasks, we used a 2-D CNN encoder defined as follows:
import torch.nn as nn

cnn_module_list = nn.ModuleList()
cnn_module_list.append(nn.Conv2d(n_channels, 16, 3, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(16, 32, 3, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(32, 128, 4, 2, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv2d(128, 256, 2, 1, 0))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Flatten())
cnn_minatar = nn.Sequential(*cnn_module_list)
where n_channels is the number of channels of the executor or oracle observation. The output size is equal to the hidden layer size.
For Mahjong, because the second dimension of the observation (tile ID) has local contextual relationships (Li et al., 2020), we used a 1-D CNN (convolving along the tile ID dimension) as the encoder, defined as follows:
cnn_module_list = nn.ModuleList()
cnn_module_list.append(nn.Conv1d(n_channels, 64, 3, 1, 1))
cnn_module_list.append(nn.Conv1d(64, 64, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv1d(64, 64, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Conv1d(64, 32, 3, 1, 1))
cnn_module_list.append(nn.ReLU())
cnn_module_list.append(nn.Flatten())
cnn_mahjong = nn.Sequential(*cnn_module_list)
The output size is 1088, close to the hidden layer size.
C.2 LATENT LAYER
For VLOG and VLOG-no oracle, the size of the z layer is half of the hidden layer size because we need to estimate both the mean and the variance of z. For all the other models (baseline, oracle, OPD-style, Suphx-style), the latent layer is one fully connected layer of hidden layer size with ReLU activation.
C.3 DECODER
The decoders for all models were 2-layer MLPs of hidden layer size. Except for BC on Mahjong, the input of the decoder was the output of the latent layer concatenated with the action, and we used the dueling Q-network structure (Wang et al., 2016) to output a scalar Q value. ReLU activation was used except for the output.
For BC on Mahjong, the input of the decoder was the output of the latent layer. The outputs of the decoder were the logits of the actions, and the action probabilities could be obtained using softmax.
D TASK-AGNOSTIC LATENT BOTTLENECK CONTROL
As we discussed in Sec. 4.2, the coefficient β of the regularization term in the VLOG loss function (Eq. 2) is adaptively regularized by Eq. 3, given D^tar_KL, which is another hyper-parameter. While our experiments demonstrated the effectiveness of this approach, the following discusses more of the thinking behind this design choice.
In principle, replacing one hyper-parameter with another does not always make training easier. However, in practice (especially in deep RL), the performance can be highly sensitive to some hyper-parameters (e.g., the entropy coefficient α in the original soft actor-critic algorithm (Haarnoja et al., 2018) needs to be tuned for each robotic task, because the reward magnitude differs among tasks). As a result, it is beneficial to replace a sensitive hyper-parameter with one that does not need fine tuning. For example, in the follow-up paper on soft actor-critic, the authors used an adaptive entropy coefficient α by introducing another hyper-parameter, the entropy target (Haarnoja et al., 2019). They empirically found that it is good to set the entropy target equal to the negative of the degrees of freedom of the agent, thus avoiding tuning α.
Our idea of replacing β with D^tar_KL is motivated by similar reasons. One obvious problem is that the magnitude of the "oracle prediction error" term (Eq. 2) depends on the reward magnitude of the task. Therefore β would also have to be adjusted to match the magnitude of the task reward. However, D^tar_KL is only related to the magnitudes of the prior z and posterior z, which do not differ much among tasks (usually on the order of 1). In practice, we found that D^tar_KL = 50 works well for all the tasks, including Maze, MinAtar and Mahjong.
Another way of regularizing β is to employ a linear or exponential scheduler. For example, in Burgess et al. (2017), the authors used a linear scheduler for the target KL divergence and got good results. However, using a scheduler introduces more hyper-parameters (at least two: the initial β and the final β), which is against our intention to reduce the impact of hyper-parameters.
E SEAQUEST LOCAL OPTIMUM
In Seaquest, the agent drives a submarine diving into the sea to shoot enemies and rescue divers to earn scores. However, the submarine has limited oxygen. To survive, it must surface to replenish the oxygen before running out, during which it temporarily cannot earn scores. A typical local optimum is to use the last remaining oxygen for diving instead of surfacing.
F MAHJONG
F.1 ESTIMATION OF GAME COMPLEXITY OF MAHJONG
We consider 4-player Japanese Mahjong4. Although there are minor rule variants, the following estimation applies to general cases.
For easier computation, we make two major simplifications (i.e., we estimate a lower bound of the game complexity): (1) melding from discard5 is not considered and (2) the information other than tiles, such as results of contextual games, points of players, is not considered.
There are 34 types of tiles, each with 4 duplicates (so 136 tiles in total). We further restrict our estimation to the last turn (i.e., the last tile is drawn). Among the 136 tiles, 53 tiles are in someone's hand (the 4 players have 14, 13, 13, 13 tiles in hand, respectively). The permutation of tiles within one's hand does not make a difference, while the permutation matters if the tiles are not in one's hand. The number of distinguishable configurations of 136 tiles thus can be computed as 136! / ((13!)^3 × 14! × (4!)^34) ∼ 10^145.
Meanwhile, for each discarded tile, it is important to know whether it was discarded immediately after being drawn or not. For 70 discarded tiles, the number of possibilities is simply 2^70 ∼ 10^21. Therefore, the lower bound of the game complexity of Mahjong is estimated as
10^145 × 10^21 ∼ 10^166.
If considering the other simplified information, the state space could be much larger. For example, consider the current points of each player. The most common rule is that each player starts with 25,000 points, with 100 points as the minimal unit (1,000 units in total), and the game terminates if someone gets negative points. So the number of possibilities can be converted to the answer of "how many ways to distribute 1000 candies to 4 kids", which is approximately (1000 + 1)^(4−1) / (4−1)! ∼ 10^8.
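These counts can be sanity-checked numerically; the short script below evaluates the factorials via log-gamma to avoid overflow.

import math

def log10_factorial(n):
    # log10(n!) = lgamma(n + 1) / ln(10)
    return math.lgamma(n + 1) / math.log(10)

log10_states = (log10_factorial(136) - 3 * log10_factorial(13)
                - log10_factorial(14) - 34 * log10_factorial(4))
log10_discards = 70 * math.log10(2)
print(round(log10_states))                    # ~145
print(round(log10_states + log10_discards))   # ~166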
F.2 DETAILS OF OBSERVATION SPACE AND ACTION SPACE ENCODING
In our Mahjong environment6, the action space is composed of 47 discrete actions (Table 5). Because not all actions are available at every step, we also provide the set of valid actions among all 47 actions at each step according to the rules, which can be used during playing and learning.
The oracle observation has a shape of 111 × 34 (the executor observation is a part of the oracle observation, with shape 93 × 34) (Fig. 5). The first dimension corresponds to 111 features (channels). The second dimension of the observation (with size 34) corresponds to the 34 Mahjong tiles (the order is Character 1-9, Dot 1-9, Bamboo 1-9, East, South, West, North, White, Green, Red). We used 1-D CNNs with convolution along the second dimension for the encoders. Suppose the current player is player 0, and the other players are numbered 1, 2, 3 counter-clockwise. The value of any element in an observation is 1 or 0, as explained in Table 4.
4 https://en.wikipedia.org/wiki/Japanese_Mahjong
5 A meld is a specific pattern of three or four tiles. A player can pick up a discarded tile from others to form a meld by displaying the meld publicly, if a certain condition is satisfied.
6 The Mahjong environment we used in this paper is available at https://github.com/pymahjong/pymahjong for reproducibility. However, we recommend using the newer version https://github.com/Agony5757/mahjong, which is better supported by the authors and much faster.
F.3 DATA AUGMENTATION
In (Japanese) Mahjong, there are 3 suit sets of tiles (characters, dots, bamboos), 4 wind tiles and 3 dragon tiles. The 3 suit sets are symmetric and thus exchangeable with each other7. So are the 4 wind tiles and the 3 dragon tiles. Exploiting this symmetry, we augmented the offline RL dataset by randomly exchanging the 3 suit sets, the 4 wind tiles and the 3 dragon tiles, respectively.
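A minimal sketch of this augmentation over the 34-tile axis, using the tile ordering given in Appendix F.2 (characters 0-8, dots 9-17, bamboos 18-26, winds 27-30, dragons 31-33); applying one shared permutation to all observations of an episode is our assumption.

import random
import numpy as np

def augment(observation):
    # observation: (..., 34) array whose last axis indexes the 34 tile types.
    perm = np.arange(34)
    suit_starts = [0, 9, 18]
    random.shuffle(suit_starts)                    # exchange the 3 suit sets
    for i, start in enumerate(suit_starts):
        perm[9 * i:9 * (i + 1)] = np.arange(start, start + 9)
    perm[27:31] = random.sample(range(27, 31), 4)  # permute the 4 winds
    perm[31:34] = random.sample(range(31, 34), 3)  # permute the 3 dragons
    return observation[..., perm]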
7 There is one exception: the winning hand pattern "all green". Since it is an extremely rare case, we simply ignore it.
1. What is the focus of the paper regarding the oracle-guiding RL problem?
2. What are the strengths and weaknesses of the proposed VLOG model?
3. How does the reviewer assess the clarity and quality of the writing?
4. What concerns does the reviewer have regarding the differences between offline RL and oracle-guided RL, theoretical results, label v^{tar}_{t}, hyperparameters, and scheduler design?
5. What minor suggestions does the reviewer offer for improving the paper's presentation?
Summary Of The Paper
This paper proposed a variational-based model, VLOG, for solving the oracle-guiding RL problem. By minimizing the KL divergence between the latent features from the oracle observations and those from the executor observations, the model can use the oracle signals to boost execution performance. The author(s) derived a lower-bound objective as the training loss and empirically demonstrated the performance of their method on three different types of environments.
Review
The paper is easy to follow and generally well written. The proposed method is intuitive and effective, as an application of variational models. The oracle-guiding problem is novel and intriguing. It is similar to offline RL, but there are some key differences that distinguish them. I urge the authors to formally define the problem to help readers who are less familiar with the oracle-guiding framework. I summarize some of my concerns in the following:
concerns:
Can you summarize the difference between offline RL and oracle-guided RL? I think the key difference is that the training observations (or oracle observations) contain a richer signal than the observations in the testing environment (or executor observations), but there could be more. Please formally define the oracle-guiding RL problem anyway.
The authors claim "VLOG is theoretically guaranteed to ..."; where are the theoretical results? I think we can show that by maximizing this variational lower bound, the KL divergence between the true posterior and the approximate posterior will be reduced, but this is a well-known property. Where are the new contributions?
How do you get the label v^{tar}_{t} in (1)? I assume a DQN or a critic network must be implemented. If that is the case, please define the model and its loss.
Will it make things easier to replace the hyper-parameter \beta with another hyper-parameter D^{tar}_{KL}? I guess both hyper-parameters are difficult to determine. In fact, other works might design a linear or exponential scheduler for \beta that allows it to grow smoothly from a minimum (in most cases, 0) to a maximum value during training. As far as I know, this method works quite well in practice.
Minor comments:
Many sentences are repeated in the abstract and introduction, but readers with a reasonable RL/ML background can follow the message with one scan. Maybe use another example for better elaboration.
In Remark 1, the paper says "they can choose q(v_t|\hat_{x}_t) to be p(v_t|x_t)u(v_t; [l,u])". Are you multiplying two distributions? In this sense, q will not define any distribution.
Poker games involve two players competing with each other. In this sense, maybe a (PO)Markov Game model can better describe this environment than a (PO)MDP.
typos:
the oracle-estimated one as posterior distribution -> the oracle-estimated one as the posterior distribution.
ICLR | Title
AdaFocal: Calibration-aware Adaptive Focal Loss
Abstract
Much recent work has been devoted to the problem of ensuring that a neural network's confidence scores match the true probability of being correct, i.e. the calibration problem. Of note, it was found that training with focal loss leads to better calibrated deep networks than cross-entropy, while achieving the same level of accuracy (Mukhoti et al., 2020). This success stems from focal loss regularizing the entropy of the network's prediction (controlled by the hyper-parameter γ), thereby reining in the network's overconfidence. Further improvement is expected if γ is selected independently for each training sample. However, the proposed Sample-Dependent Focal Loss (FLSD) in Mukhoti et al. (2020) is based on simple heuristics that do not take into account the difference in the network's calibration behaviour for different samples (or groups of samples). As a result it is only slightly better than focal loss with fixed γ. In this paper, we propose a calibration-aware version of FLSD, called AdaFocal, which, at every training step t, adaptively modifies the γ for each individual group of samples based on (1) γ_{t−1} from the previous training step and (2) the magnitude of the network's under/over-confidence for those groups. We evaluate our method on various small to large-scale image recognition tasks and one NLP task, covering a variety of network architectures, to confirm that AdaFocal consistently achieves improved calibration without a significant loss in accuracy. Further, the models trained with AdaFocal are shown to have significantly improved Out-of-Distribution (OOD) detection capability.
1 INTRODUCTION
Neural networks have found tremendous success in almost every field, including computer vision, natural language processing, and speech recognition. Over time, these networks have grown complex and larger in size to achieve state-of-the-art performance, and they continue to evolve further in that direction. However, it has been well established that such high-capacity networks suffer from poor calibration (Guo et al., 2017), i.e. the confidence scores of the predictions do not reflect the real-world probabilities of those predictions being true. For example, if the network assigns 0.8 confidence to a set of predictions, we should expect 80% of those predictions to be correct. However, this is far from reality since modern networks tend to be grossly over-confident. This is of great concern, particularly for mission-critical applications such as autonomous driving and medical diagnosis, wherein the downstream decision making relies not only on the predictions but also on their confidence.
In recent years, there has been a growing interest in developing methods for calibrating neural networks. These can be mainly divided into two categories (1) post-hoc approaches that perform calibration after training (2) methods that calibrate the model during training itself. The first includes methods such as Platt scaling Platt (1999), histogram binning Zadrozny & Elkan (2001), Isotonic regression Zadrozny & Elkan (2002), Bayesian binning and averaging Naeini et al. (2015); Naeini & Cooper (2016), and Spline fitting Gupta et al. (2021). Methods in the second category focus on training the model on an objective function that accounts for calibration as well, including Maximum Mean Calibration Error (MMCE) Kumar et al. (2018), Label smoothing Müller et al. (2019), and recently focal loss Mukhoti et al. (2020). These methods aim to produce inherently calibrated models which when combined with post training calibration methods lead to further improvements.
Contribution. Our work falls into the second category. We build upon the calibration properties of focal loss to propose a modification that further improves its performance. Firstly, we make the observation that while regular focal loss, with a fixed γ parameter, improves the overall calibration by preventing samples from being over-confident, it also leaves other samples under-confident. To
address this drawback, we propose a modification to the focal loss called AdaFocal that adjusts the γ for each training sample (or rather a group of samples) separately by taking into account the model’s under/over-confidence about a similar corresponding group in the validation set. We evaluate the performance of our method on four image classification tasks: CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, and one text classification task: 20 Newsgroup, using various model architectures, and show that AdaFocal substantially outperforms the regular focal loss and other state-of-the-art calibration techniques in the literature. We further study the performance of AdaFocal on an out-of-distribution detection task and find it to perform better than the competing methods. Finally, we find that the models trained using AdaFocal get innately calibrated to a level that most times do not significantly benefit from temperature scaling.
2 PROBLEM SETUP AND DEFINITIONS
Consider a classification setting where we are given a set of training data {(xn, ytrue,n)}, with xn ∈ X being the input and ytrue,n ∈ Y = {1, 2, . . . , K} the associated ground-truth label. Using this data we wish to train a classifier fθ(x) that outputs a vector p̂ over the K classes. We also assume access to a validation set for hyper-parameter tuning and a test set for evaluating performance. For example, fθ(·) can be a neural network with learnable parameters θ, x an image, and p̂ the output of a softmax layer whose kth element p̂k is the probability score for class k. We refer to ŷ = argmaxk∈Y p̂k as the network's prediction and the associated probability score p̂ŷ as the predicted confidence; the same quantity for the jth example is denoted p̂ŷ,j.
In this setting, a network is said to be perfectly calibrated if the predicted confidence p̂ŷ reflects the true probability of the network classifying x correctly, i.e. $\mathbb{P}(\hat{y} = y_{true} \mid \hat{p}_{\hat{y}} = p) = p,\ \forall p \in [0, 1]$ Guo et al. (2017). Continuing our example, if the network assigns an average confidence score of 0.8 to a set of predictions, then we should expect 80% of those to be correct. We define the Calibration Error as $\mathcal{E} = \hat{p}_{\hat{y}} - \mathbb{P}(\hat{y} = y_{true} \mid \hat{p}_{\hat{y}})$ and the Expected Calibration Error as $\mathbb{E}_{\hat{p}_{\hat{y}}}\big[\,|\hat{p}_{\hat{y}} - \mathbb{P}(\hat{y} = y_{true} \mid \hat{p}_{\hat{y}})|\,\big]$ Guo et al. (2017). However, as the true calibration error cannot be computed empirically with a finite-sized dataset, the following three approximations are generally used in the literature. For a dataset $\{(x_n, y_{true,n})\}_{n=1}^{N}$:

(1) $\mathrm{ECE} = \sum_{i=1}^{M} \frac{|B_i|}{N}\,|C_i - A_i|$ Guo et al. (2017), where $B_i$ is the equal-width bin containing all examples $j$ with $\hat{p}_{\hat{y},j} \in [\frac{i-1}{M}, \frac{i}{M})$, $C_i = \frac{1}{|B_i|}\sum_{j \in B_i} \hat{p}_{\hat{y},j}$ is the average confidence, and $A_i = \frac{1}{|B_i|}\sum_{j \in B_i} \mathbb{1}(\hat{y}_j = y_{true,j})$ is the bin accuracy. Note that $\mathcal{E}_i = C_i - A_i$ is the empirical approximation of the calibration error $\mathcal{E}$.

(2) $\mathrm{AdaECE} = \sum_{i=1}^{M} \frac{|B_i|}{N}\,|C_i - A_i|$ Nguyen & O'Connor (2015), where the bins are adaptively sized (equal-mass), i.e. $|B_i| = |B_j|\ \forall i, j$, so that each bin contains an equal number of samples.

(3) $\mathrm{ClasswiseECE} = \frac{1}{K}\sum_{i=1}^{M}\sum_{k=1}^{K} \frac{|B_{i,k}|}{N}\,|C_{i,k} - A_{i,k}|$ Kumar et al. (2018); Kull et al. (2019), which estimates the calibration over all $K$ classes, where $C_{i,k} = \frac{1}{|B_{i,k}|}\sum_{j \in B_{i,k}} \hat{p}_{k,j}$ is the average confidence for the $k$th class and $A_{i,k} = \frac{1}{|B_{i,k}|}\sum_{j \in B_{i,k}} \mathbb{1}(y_{true,j} = k)$ is the accuracy of the $k$th class in the $i$th bin.
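To make these estimators concrete, the following NumPy sketch computes ECE and AdaECE from top-class confidences and correctness indicators; the function names and the quantile-based equal-mass binning are our own illustrative choices, not code from the paper.

import numpy as np

def ece(conf, correct, num_bins=15):
    # Equal-width binning of top-class confidences (Guo et al., 2017).
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    n, total = len(conf), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf >= lo) & (conf < hi) if hi < 1.0 else (conf >= lo) & (conf <= hi)
        if mask.any():
            C_i = conf[mask].mean()      # average confidence in the bin
            A_i = correct[mask].mean()   # accuracy in the bin
            total += mask.sum() / n * abs(C_i - A_i)
    return total

def ada_ece(conf, correct, num_bins=15):
    # Equal-mass (adaptive) binning: bin edges are confidence quantiles.
    edges = np.quantile(conf, np.linspace(0.0, 1.0, num_bins + 1))
    idx = np.clip(np.searchsorted(edges, conf, side='right') - 1, 0, num_bins - 1)
    n, total = len(conf), 0.0
    for i in range(num_bins):
        mask = idx == i
        if mask.any():
            total += mask.sum() / n * abs(conf[mask].mean() - correct[mask].mean())
    return total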
Lastly, as ECE has been shown to be a biased estimate of true calibration Vaicenavicius et al. (2019), we additionally use two de-biased estimates of ECE namely ECEdebiased proposed in Kumar et al. (2019) and ECEsweep proposed in Roelofs et al. (2021) to further confirm our results.
3 CALIBRATION PROPERTIES OF FOCAL LOSS
Focal loss Lin et al. (2017), $\mathcal{L}_{FL}(p) = -(1-p)^{\gamma}\log p$, was originally proposed to improve the accuracy of classifiers by focusing on hard examples and down-weighting well-classified examples. Recently it was further shown that focal loss may also result in significantly better calibrated models than cross entropy Mukhoti et al. (2020). This is because, based on the relation $\mathcal{L}_{FL} \geq \mathrm{KL}(q\,\|\,\hat{p}) - \gamma\,\mathbb{H}(\hat{p})$, where q is the one-hot target vector, focal loss, while minimising the main KL-divergence objective, also increases the entropy of the prediction p̂. As a consequence, this prevents the network from becoming overly confident on wrong predictions and improves calibration overall.
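For reference, the focal loss itself is a few lines in PyTorch; the sketch below is a minimal batch implementation of $-(1-p)^{\gamma}\log p$ written by us, not taken from the authors' code.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    # -(1 - p)^gamma * log(p), where p is the true-class probability.
    log_p = F.log_softmax(logits, dim=-1)
    log_p_true = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    p_true = log_p_true.exp()
    return (-(1.0 - p_true) ** gamma * log_p_true).mean()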
The regular focal loss with fixed γ, as we show in this section, does not achieve the best calibration. In Figure 1, we plot the calibration behaviour of ResNet-50 in different bins when trained on CIFAR-10 with different focal losses. The ith bin's calibration error, subscripted by "val", Eval,i = Cval,i − Aval,i, is computed on the validation set using 15 equal-mass bins. The figure shows the lowest (bin-0), a middle (bin-7), and the highest bin (bin-14). For reference, the rest of the bins and their bin boundaries are
shown in Appendix B. From Figure 1(a), we see that although focal loss with γ = 4 achieves the overall lowest calibration error (AdaECE), there is no single γ that performs best across all the bins. For example, in bin-0, γ = 4, 5 seem to achieve better calibration whereas γ = 0, 3 are over-confident. For bin-7, on the other hand, γ = 3 seems better calibrated whereas γ = 4, 5 are under-confident and γ = 0 is over-confident.
This clearly indicates that using different γs for different bins can further improve the calibration. Such an attempt is presented in Mukhoti et al. (2020), called the Sample-Dependent Focal Loss (FLSD-53), which assigns γ = 5 if the training sample's true class posterior p̂ytrue ∈ [0, 0.2) and γ = 3 if p̂ytrue ∈ [0.2, 1] (see the short sketch after this discussion). However, this strategy is fixed for every dataset-model pair and is based on the simple heuristic of choosing a higher γ for smaller values of p̂ytrue and a relatively lower γ for higher values of p̂ytrue. Moreover, from Figure 1(b), we see that FLSD-53 is also not the best strategy across all the bins. This, therefore, motivates the design of a γ selection strategy that can assign an appropriate γ to each bin based on the magnitude and sign of Eval,i. In order to design such a strategy, however, we need solutions to the following two major challenges:
1. How do we find some correspondence between the "confidence of the training samples", which we can manipulate during training by adjusting the entropy-regularising parameter γ, and the "confidence of the validation samples", which we actually want to manipulate but do not have direct control over? In other words, in order to indirectly control the confidence of a particular group of validation samples, how do we know which particular group of training samples' confidence to manipulate?
2. Given that there is a correspondence between a training group and a validation group (even if loose), how do we arrive at the exact values of γ that will lead to better calibration?
We try to answer the first question in the next section and the answer to the second question leads to AdaFocal which is the main contribution of the paper.
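For concreteness, the FLSD-53 rule referenced above amounts to the following one-liner (a sketch; the function name is ours):

def flsd_53_gamma(p_true):
    # FLSD-53 heuristic of Mukhoti et al. (2020): gamma = 5 when the true-class
    # posterior is below 0.2, and gamma = 3 otherwise.
    return 5.0 if p_true < 0.2 else 3.0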
4 CORRESPONDENCE BETWEEN CONFIDENCE OF TRAIN AND VAL. SAMPLES
In order to find some correspondence, an intuitive thing to do would be to group the validation samples into M equal-mass validation-bins, and then use these validation-bin boundaries to group the training samples as well. Then, we can compare the average confidence of the validation samples and the average confidence of the training samples, in the same validation-bin, to check for any correspondence.
Quantities of interest For binning validation samples, we always look at the confidence of the top predicted class ŷ denoted by p̂val,top (bin average: Cval,top). For training samples, on the other hand, instead of the confidence of the top predicted class ŷ denoted by p̂train,top (bin average: Ctrain,top), we will focus on the confidence of the true class ytrue denoted by p̂train,true (average: Ctrain,true) because during training we only care about p̂train,true which is manipulated through some loss function. For reference however, Figure 10 in Appendix C compares Ctrain,true and Ctrain,top to show that as the training set accuracy approaches 100%, the top predicted class and the true class for a training sample become the same. Henceforth, for a cleaner notation, we will always refer to Ctrain ≡ Ctrain,true and Cval ≡ Cval,top.
Common binning Here, the training samples are grouped using the bin boundaries of the validation-bins. In Figure 2(b), we compare Ctrain,i in validation-bin-i¹ with Cval,i in the same validation-bin-i, and find that there is indeed a good correspondence between the two quantities. For example, in Figure 2(b), as γ increases from 0 and 3 to 5, the solid line (Ctrain,i) gets lower, and the same behaviour is observed on the starred line (Cval,i) as well. For completeness, the rest of the bins are shown in Figure 12, Appendix C. This is very encouraging, as we can now expect (even if loosely) that if we increase/decrease the confidence of a group of training samples in some lower (or middle, or higher) probability region, then the same will be reflected on a similar group of validation samples in the lower (or middle, or higher) probability region. This therefore provides a way to indirectly control the value of Cval,i by manipulating Ctrain,i. From a calibration point of view, our strategy going forward will be to exploit this correspondence to keep Ctrain,i (which we have control over during training) close to Aval,i (the validation set accuracy in validation-bin-i) so that Cval,i also stays close to Aval,i, thereby reducing the calibration error Eval,i = Cval,i − Aval,i.
Independent binning Before proceeding, for completeness, we also look at the case where training samples and validation samples are grouped independently into their respective training-bins and validation-bins. Figure 2(a) compares Ctrain,i in training-bin-i with Cval,i in validation-bin-i. We observe behaviour similar to that mentioned above. Note that since the binning is independent, the boundaries of training-bin-i may not be exactly the same as those of validation-bin-i; however, as shown in Figure 11, Appendix C (along with the rest of the bins and their bin boundaries), they are quite close, meaning that a training group in a lower (/middle/higher) probability region has good correspondence with the validation group in a similar nearby region.
Going forward, for ease of algorithm design, we will simply stick to the case of "common binning", where training samples are grouped as per the validation-bin boundaries (a small sketch of this bookkeeping follows). This allows us to maintain a one-to-one correspondence between the boundaries of the ith training and validation group.
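A minimal NumPy sketch of this common-binning bookkeeping is below; all names are ours, and empty bins default to zero (cf. footnote 1 on Ctrain,i dropping to zero).

import numpy as np

def common_binning_stats(p_val_top, correct_val, p_train_true, num_bins=15):
    # Equal-mass bin edges from validation confidences; the same edges bin the training set.
    edges = np.quantile(p_val_top, np.linspace(0.0, 1.0, num_bins + 1))
    edges[0], edges[-1] = 0.0, 1.0
    val_idx = np.clip(np.searchsorted(edges, p_val_top, side='right') - 1, 0, num_bins - 1)
    tr_idx = np.clip(np.searchsorted(edges, p_train_true, side='right') - 1, 0, num_bins - 1)
    mean_or_zero = lambda a, m: a[m].mean() if m.any() else 0.0
    C_val = np.array([mean_or_zero(p_val_top, val_idx == i) for i in range(num_bins)])
    A_val = np.array([mean_or_zero(correct_val, val_idx == i) for i in range(num_bins)])
    C_train = np.array([mean_or_zero(p_train_true, tr_idx == i) for i in range(num_bins)])
    return edges, C_train, C_val, A_val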
5 PROPOSED METHOD
Let’s denote the nth training sample’s true class posterior p̂ytrue by pn. Given that pn falls into validation-bin-b, our goal is to keep pn, or as per the discussion above its averaged equivalent Ctrain,b, closer to Aval,b so that the same is reflected on Cval,b. For manipulating pn, we will utilize the regularization effect that focal loss’s parameter γ has on the confidence of the predictions Mukhoti
1It may happen that no training sample falls within a particular validation-bin's boundaries. In that case, Ctrain,i drops to zero, as seen for example in bin-14 in Figure 2(b).
et al. (2020). At this point, one can choose to update γb based either on (1) how far pn is from Aval,b, i.e. γ = f(pn − Aval,b), or (2) how far Cval,b is from Aval,b, i.e. γ = f(Cval,b − Aval,b). Such a γ-update rule should ensure that whenever the model is over-confident, i.e. pn > Aval,b (or Cval,b > Aval,b), γ is increased so that the gradients get smaller, which prevents pn from increasing further. On the other hand, when pn < Aval,b (or Cval,b < Aval,b), i.e. the model is under-confident, we decrease γ so as to get larger gradients that in turn will increase pn.²
Based on this discussion, we next design and study a calibration-aware γ-update strategy called CalFocal, which with some additional modifications leads to AdaFocal.
5.1 CALIBRATION AWARE FOCAL LOSS (CALFOCAL)
Case 1: γ = f(pn − Aval,b) Treating Aval,b as the point from which we do not want pn to deviate, we make the focal loss parameter γ a function of pn − Aval,b to get
$\mathcal{L}_{CalFocal}(p_n) = -(1-p_n)^{\gamma_n}\log p_n, \quad \text{with}\ \ \gamma_n = \exp(\lambda\,(p_n - A_{val,b})), \qquad (1)$
where b is the validation-bin in which pn falls. The hyper-parameter λ is a scaling factor which, combined with the exponential function, helps to quickly ramp γ up or down. The exponential function adheres to the γ-update rule mentioned earlier and also ensures γ > 0. Figure 3(a) plots LCalFocal vs. pn for Aval,b = 0.8. We see that, depending on the strength of λ, the loss drops drastically near pn = 0.8 and thereafter remains close to zero. This shows that LCalFocal first pushes pn towards 0.8 and then slows its growth towards overconfidence. Next, in Figure 3(c), we find that CalFocal with λ = 10, 100 is able to reduce the calibration error compared to cross entropy, but it is still far from FLSD-53's performance. Also note in Figure 3(b) that too high a λ (= 100) hurts the accuracy of the model. Most importantly, Figure 3(d) compares Ctrain,i with Cval,i (and also Aval,i) for bin-0, where we find some evidence that the strategy of bringing pn, or Ctrain,i (solid lines), closer to Aval,i (dashed lines) results in Cval,i (starred lines) getting closer to Aval,i as well, thus slightly reducing the calibration error Eval,i = Cval,i − Aval,i.
Case 2: γ = f(Cval,b − Aval,b) Note that Eq. 1 assigns a different γn to each training sample. To reduce computation and avoid using a different γn for each training sample, one can instead use a common γb for all the training samples that fall into validation-bin-b, by simply making it a function of Cval,b − Aval,b instead of pn − Aval,b:
$\mathcal{L}_{CalFocal}(p_n) = -(1-p_n)^{\gamma_b}\log p_n, \quad \text{with}\ \ \gamma_b = \exp(\lambda\,(C_{val,b} - A_{val,b})), \qquad (2)$
where b is the validation-bin in which pn falls. As shown in Appendix D, its performance is very similar to (or slightly better than) CalFocal in Eq. 1. Further, it makes more sense to update γ based on how far Cval,b is from Aval,b, rather than how far pn is from Aval,b, because, as shown in Figure 3(d) for bin-0, one may find Cval,b (starred lines) quite close to Aval,b (dashed lines) even when pn or its
2Note that for focal loss, increasing γ does not always lead to smaller gradients. This mostly holds true in the region pn approximately > 0.2 (see Figure 3(a) in Mukhoti et al. (2020)). However, in practice, and as shown by the training-bin boundaries of bin-0 and bin-1 in Figure 11, Appendix C, we find the majority of training samples to lie above 0.2 during the majority of training; therefore, for the experiments in this paper, we simply stick to the rule of increasing γ to decrease gradients and stop pn from increasing, and vice versa.
average equivalent Ctrain (solid lines) is far from Aval,b. At the point where Cval,b = Aval,b, we should stop updating γ further, even though pn − Aval,b ≠ 0, as we have reached our goal of making Eval,b = Cval,b − Aval,b = 0. Therefore, we use Eq. 2 of Case 2 as the base for AdaFocal.
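As a sketch, the per-bin γ of Eq. 2 is a one-liner on top of the binning statistics sketched earlier (λ = 10 here is just an illustrative value):

import numpy as np

def calfocal_gammas(C_val, A_val, lam=10.0):
    # Eq. 2: over-confident bins (C_val > A_val) get gamma > 1 (smaller gradients),
    # under-confident bins get gamma < 1 (larger gradients).
    return np.exp(lam * (C_val - A_val))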
Limitations of CalFocal: (1) Suppose that at some point of training, a high γb reduces the calibration error Eval,b = Cval,b − Aval,b over the next few epochs. It would then be desirable to continue training with the same high γb. However, CalFocal's update rule in Eq. 2 will instead reduce γ → 1 as Cval,b − Aval,b → 0. (2) Suppose that at some point Cval,b − Aval,b is quite high. This sets γb to some high value as well, depending on the hyper-parameter λ. If this γb is still not high enough to bring down the confidence, we would want a way to increase γb further. However, CalFocal is incapable of doing so, as it will continue to hold at γb = exp(λ(Cval,b − Aval,b)). By addressing these two issues, in the next sub-section we arrive at the final AdaFocal algorithm.
5.2 CALIBRATION-AWARE ADAPTIVE FOCAL LOSS (ADAFOCAL)
A straightforward way to address the above limitations is to make γb,t depend on γb,t−1, i.e.
$\mathcal{L}(p_n, t) = -(1-p_n)^{\gamma_{b,t}}\log p_n, \quad \text{with}\ \ \gamma_{b,t} = \gamma_{b,t-1}\cdot\exp(C_{val,b} - A_{val,b}). \qquad (3)$

This update rule addresses the limitations of CalFocal in the following way. Suppose that at some point we observe over-confidence, i.e. Eval,b = Cval,b − Aval,b > 0. Then, in the next step, γb will be increased. In subsequent steps, it will continue to increase until the calibration error Eval,b starts decreasing (this additional increase in γ was not possible with CalFocal). Once Eval,b starts decreasing, the growth of γb slows over the following epochs, and γb ultimately settles at a value for which Eval,b = 0 (at Eval,b = 0, CalFocal would instead cause γ to go down to 1). Next, if this current value of γb starts causing under-confidence, i.e. Cval,b − Aval,b < 0, the update rule kicks in to reduce γ, thus allowing Cval,b to increase back towards Aval,b. This oscillating behaviour of AdaFocal around the desired point Cval,b = Aval,b is its main advantage in reducing the calibration error in every bin. Additionally, note the absence of the hyper-parameter λ in the exponent of Eq. 3, which makes AdaFocal hyper-parameter free.
Finally, note an undesirable property of Eq. 3: the unbounded exponential update. This may easily cause γt to explode, since it can be expanded as $\gamma_t = \gamma_{t-1}\,e^{\mathcal{E}_{val,t}} = \gamma_0\,\exp\!\big(\sum_{s=1}^{t}\mathcal{E}_{val,s}\big)$. Thus, if Eval,t > 0 for quite a few epochs, γt will become so large that even if Eval,t < 0 in subsequent epochs, it may not decrease to the desired level. We remedy this by simply constraining γt to an upper bound γmax, to get the AdaFocal loss as
$\mathcal{L}_{AdaFocal}(p_n, t) = -(1-p_n)^{\gamma_{b,t}}\log p_n, \quad \text{with}\ \ \gamma_{b,t} = \min\{\gamma_{max},\ \gamma_{b,t-1}\cdot e^{C_{val,b} - A_{val,b}}\} \qquad (4)$

An algorithmic description of training with AdaFocal (or CalFocal) is given in Algorithm 1. Limitation: one may argue that γmax is again a hyper-parameter; however, it does not require any special fine-tuning. Its sole purpose is to stop γ from exploding, and any reasonable value around 20 works quite well in practice. For all our experiments, we use γmax = 20. For a comparison of AdaFocal with γmax = 20, γmax = 50, and unconstrained γmax = ∞, please refer to Appendix L.
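Since Algorithm 1 is not reproduced here, the following PyTorch-style sketch shows one plausible reading of the AdaFocal loop of Eq. 4; the helper names (adafocal_loss, update_gammas, validation_bin_stats) and the epoch-level timing of the γ update are our assumptions, not the authors' exact code.

import torch
import torch.nn.functional as F

def adafocal_loss(logits, targets, bin_edges, gammas):
    # Per-sample gamma looked up from the validation bin (a 1-D tensor of edges)
    # that the true-class probability p_true falls into.
    log_p = F.log_softmax(logits, dim=-1)
    log_p_true = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    p_true = log_p_true.exp()
    b = torch.clamp(torch.bucketize(p_true.detach(), bin_edges) - 1, 0, len(gammas) - 1)
    return (-(1.0 - p_true) ** gammas[b] * log_p_true).mean()

def update_gammas(gammas, C_val, A_val, gamma_max=20.0):
    # Eq. 4: gamma_{b,t} = min(gamma_max, gamma_{b,t-1} * exp(C_val_b - A_val_b)).
    return torch.clamp(gammas * torch.exp(C_val - A_val), max=gamma_max)

# Sketch of the outer loop: train for one epoch, then refresh the validation
# statistics (equal-mass bin edges, C_val, A_val) and update the per-bin gammas.
# for epoch in range(num_epochs):
#     for x, y in train_loader:
#         loss = adafocal_loss(model(x), y, bin_edges, gammas)
#         opt.zero_grad(); loss.backward(); opt.step()
#     bin_edges, C_val, A_val = validation_bin_stats(model, val_loader)  # hypothetical helper
#     gammas = update_gammas(gammas, C_val, A_val)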
6 EXPERIMENTS
Experimental setup We evaluate the performance of our proposed method on image and text classification tasks. For image classification, we use CIFAR-10, CIFAR-100 Krizhevsky (2009), Tiny-ImageNet Deng et al. (2009), and ImageNet Russakovsky et al. (2015) to analyze the calibration of ResNet-50, ResNet-110 He et al. (2016), Wide-ResNet-26-10 Zagoruyko & Komodakis (2016), and DenseNet-121 Huang et al. (2017) models. For text classification, we use the 20 Newsgroups dataset Lang (1995) and train the Global Pooling CNN model Lin et al. (2014). Further details about the datasets, models, and experimental configurations are given in Appendix E.
Baselines As baseline calibration methods we use MMCE Kumar et al. (2018), Brier loss Brier (1950), Label smoothing Müller et al. (2019), and the sample-dependent focal loss FLSD-53. We also report the effect of temperature scaling Guo et al. (2017) on top of these calibration methods. Following Mukhoti et al. (2020), we select the optimal temperature that produces the minimum ECE on the validation set by searching in the interval (0, 10] with a step size of 0.1 (a sketch of this search follows).
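A sketch of that temperature search, assuming detached validation logits on CPU and reusing an ECE estimator such as the one sketched in Section 2; the function name is ours.

import torch

def fit_temperature(logits, labels, ece_fn, num_bins=15):
    # Grid search over T in (0, 10] with step 0.1, keeping the T with lowest ECE.
    best_T, best_ece = 1.0, float('inf')
    for step in range(1, 101):
        T = 0.1 * step
        conf, pred = torch.softmax(logits / T, dim=-1).max(dim=-1)
        e = ece_fn(conf.numpy(), (pred == labels).float().numpy(), num_bins)
        if e < best_ece:
            best_T, best_ece = T, e
    return best_T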
Results. In Figure 4, we compare AdaFocal against cross entropy (CE) and FLSD-53 for ResNet-50 trained on various small to large-scale image datasets. We chose FLSD-53 as our competitive baseline
as it was shown to be consistently better than MMCE, Brier loss, and Label smoothing across many dataset-model pairs Mukhoti et al. (2020). The figure plots the test-set error and the ECE calibration metric. In Figure 5, for ResNet-50 on CIFAR-10 and ImageNet, we plot (1) the calibration statistics Eval = Cval − Aval on the validation set and (2) the dynamics of the associated γt used by AdaFocal during training, for a few bins covering the lower, middle, and higher probability regions.
From these figures, we first observe that for CIFAR-10, CIFAR-100, and Tiny-ImageNet, FLSD-53 is much better calibrated than CE. This is because, as shown in Figure 5(a) for ResNet-50 and CIFAR-10, CE is over-confident compared to FLSD-53 in every bin. For ImageNet, however, the behaviour is reversed: FLSD-53 is more poorly calibrated than CE. The reason, as shown in Figure 5(b), is that due to its high γ values of 5 and 3, FLSD-53 makes the model largely under-confident in each bin, leading to an overall high calibration error. This shows that FLSD-53 is a heuristic-based strategy (derived from a limited number of dataset-model pairs) that does not generalize well. AdaFocal, on the other hand, is well calibrated on all four dataset-model pairs while achieving similar accuracy.
The dynamics/evolution of γt during training for different bins is shown in Figure 5: (1) for CIFAR-10, we find γt to be closer to 1 for higher bins and closer to 20 for lower bins. These γs found by AdaFocal result in better calibration than the γ = 5, 3 of FLSD-53. (2) For ImageNet, we find AdaFocal's
γ → 0. This makes sense because for ImageNet, from Figure 4(d), cross entropy (i.e. γ = 0 for every bin) is much better calibrated than FLSD-53, and AdaFocal (starting from γ = 1) also ultimately settles down to CE (γ = 0) to achieve a similar level of calibration. This confirms that during training, unlike CE or FLSD-53, AdaFocal, being aware of the network's current under/over-confidence, is able to guide the γs to values that maintain a well-calibrated model at every step. Also note that for an unseen dataset-model pair there is no way to know beforehand which γ will perform better, but this empirical evidence shows that AdaFocal will automatically find appropriate γs.
The rest of the experiments are shown in Table 1 (ECE) and Table 2 (Error)³. From Table 1, we observe that, prior to temperature scaling, AdaFocal outperforms the baseline methods by a substantial margin in 9 out of 11 cases. With post-temperature scaling included, AdaFocal achieves the lowest calibration error in 7 out of the 11 experiments. Further, observe that in many cases temperature scaling on top of AdaFocal does not offer any improvement (optimal temperature = 1). For the rest, the optimal temperature is close to 1, indicating that AdaFocal produces innately calibrated models during training itself. The consistency of AdaFocal across other calibration metrics is shown through AdaECE and ClasswiseECE in Appendix F. ECEdebiased (15 and 30 bins), ECEEW-sweep (equal-width), and ECEEM-sweep (equal-mass) are reported in Appendix G. The significance of the results is confirmed through ECE error bars, with means and standard deviations computed over 5 runs, in Appendix H.
Number of bins The ECE metrics in the paper are reported using 15 bins. For AdaFocal training, we experiment with 5, 10, 15, 20, 30, and 50 equal-mass (adaptive) bins when drawing calibration statistics from the validation set, as reported in Appendix I. We find the best results in the range of 10 to 20 bins. Performance degrades when the number of bins is too small (< 10) or too large (> 20); therefore, for AdaFocal training in this paper we use 15 bins as well.
Out-of-Distribution (OOD) detection. Following Mukhoti et al. (2020), we report the performance of AdaFocal on an OOD detection task. We train ResNet-110 and Wide-ResNet26-10 on
3While reproducing the baseline experiments in Mukhoti et al. (2020) we obtained very similar results; therefore, we simply borrow the exact values to maintain a consistent comparison.
CIFAR-10 as the in-distribution data and test on SVHN Netzer et al. (2011) and CIFAR-10-C Hendrycks & Dietterich (2019) (with level-5 Gaussian noise corruption) as the OOD data. Using the entropy of the softmax distribution as the measure of uncertainty, the corresponding ROC plots are shown in Figure 6 and the AUROC scores are reported in Table 10 in Appendix J. We see that models trained with AdaFocal outperform focal loss with γ = 3 (FL-3) and FLSD-53. These results further highlight the benefit of an inherently calibrated model produced using AdaFocal, as post-hoc techniques such as temperature scaling, as shown in the figure, are ineffective under distributional shift Snoek et al. (2019).
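A minimal sketch of the entropy-based scoring and AUROC computation, with scikit-learn assumed available; treating OOD samples as the positive class is our convention here, not necessarily the paper's.

import numpy as np
from sklearn.metrics import roc_auc_score

def entropy_ood_auroc(probs_in, probs_ood):
    # Predictive entropy of the softmax output as the uncertainty score.
    H = lambda p: -(p * np.log(p + 1e-12)).sum(axis=-1)
    scores = np.concatenate([H(probs_in), H(probs_ood)])
    labels = np.concatenate([np.zeros(len(probs_in)), np.ones(len(probs_ood))])
    return roc_auc_score(labels, scores)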
7 CONCLUSION
In this work, we first revisit the calibration properties of the regular focal loss and highlight the downside of using a fixed γ for all samples. In particular, by studying the calibration behaviour of different samples in different probability regions, we find that there is no single γ that achieves the best calibration over the entire region. We use this observation to motivate selecting γ independently for each sample (or group of samples) based on knowledge of the network's under/over-confidence. We propose a calibration-aware adaptive focal loss called AdaFocal that accounts for such information and updates γt at every step based on γt−1 from the previous step and the magnitude of the network's under/over-confidence. We find AdaFocal to perform consistently better across different dataset-model pairs, producing innately calibrated models that most times do not substantially benefit from the post-hoc processing of temperature scaling. Additionally, we find models trained with AdaFocal to be significantly better at the out-of-distribution detection task.
Reproducibility For reproducibility, we have included in the supplementary material a zip file that contains the code base for running the experiments. Commands for particular experiments:
• CIFAR-10, ResNet-50, Cross entropy: python train.py --dataset cifar10 --model resnet50 --loss cross_entropy --num_bins 15 -e 400 --save-path experiments/cifar10_resnet50_ce
• CIFAR-100, ResNet-50, Cross entropy: python train.py --dataset cifar100 --model resnet50 --loss cross_entropy --num_bins 15 -e 400 --save-path experiments/cifar100_resnet50_ce
• Tiny-ImageNet, ResNet-50, Cross entropy: python train.py --dataset tiny_imagenet --model resnet50_ti --loss cross_entropy --num_bins 15 --first-milestone 40 --second-milestone 60 -e 100 -b 64 -tb 64 --dataset-root data/tiny-imagenet-200 --save-path experiments/tinyImageNet_resnet50_ce
• 20 Newsgroups, CNN, Cross entropy: python main.py --loss cross_entropy --num-epochs 50 --num-bins 15 --save-path experiments/cnn_ce
APPENDICES
A ADAFOCAL’S GENERALIZATION TO LARGE SCALE DATASET (IMAGENET)
For ImageNet, FLSD-53 performs very poorly in terms of calibration. The reason is that, due to its high γ values of 5 and 3, FLSD-53 becomes extremely under-confident in each bin, leading to a high calibration error. AdaFocal, on the other hand, remains well calibrated, which confirms that during training, unlike CE or FLSD-53, AdaFocal, being aware of the network's current under/over-confidence (through the validation set), is able to adjust the γs in a way that maintains a well-calibrated model at every step. Further, in Figure 8, note the dynamics/evolution of γt in different bins. For ImageNet, we find AdaFocal's γ → 0, which makes sense because, from Figure 7, cross entropy (i.e. γ = 0 for every bin) is much better calibrated than FLSD-53, and AdaFocal (starting from γ = 1) settles down to CE (γ = 0). Note that for an unseen dataset-model pair it is not possible to know beforehand whether CE or focal loss will perform better. However, from these experiments, we find strong evidence that, for any dataset-model pair, AdaFocal will lead to the γs that result in the best calibration.
B CALIBRATION BEHAVIOUR OF FOCAL LOSS IN DIFFERENT BINS
In the main paper, we showed the calibration behaviour of different focal losses for ResNet-50 trained on CIFAR-10 for only a few bins. For completeness, the rest of the bins and their calibration errors Ei = Cval,i − Aval,i are shown in Figure 9 for focal losses with γ = 0, 3, 4, 5. We observe that there is no single γ that performs best across all the bins. Rather, every bin seems to have a particular γ that achieves the best calibration.
C CORRESPONDENCE BETWEEN CONFIDENCE OF TRAINING AND VALIDATION SAMPLES
D CALFOCAL LOSS
E DATASETS AND EXPERIMENTS
E.1 DATASET DESCRIPTION
CIFAR-10 Krizhevsky (2009): This dataset contains 60,000 coloured images of size 32 × 32, equally divided into 10 classes. A split of 45,000/5,000/10,000 images is used as the train/validation/test sets respectively.

CIFAR-100 Krizhevsky (2009): This dataset contains 60,000 coloured images of size 32 × 32, equally divided into 100 classes. A split of 45,000/5,000/10,000 images is used as the train/validation/test sets respectively.
ImageNet Russakovsky et al. (2015): ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 is an image classification and localization dataset. This dataset spans 1000 object classes and contains 1,281,167 training images and 50,000 validation images.
Tiny-ImageNet Deng et al. (2009): It is a subset of the ImageNet dataset with 64 × 64 dimensional images and 200 classes. It has 500 images per class in the training set and 50 images per class in the validation set.
20 Newsgroups Lang (1995): This dataset contains 20,000 news articles, categorised evenly into 20 different newsgroups. Some of the newsgroups are very closely related to each other (e.g. comp.sys.ibm.pc.hardware / comp.sys.mac.hardware), while others are highly unrelated (e.g. misc.forsale / soc.religion.christian). We use a train/validation/test split of 15,098/900/3,999 documents.
E.2 EXPERIMENTAL DETAILS
For all our experiments, we have used Nvidia Titan X Pascal GPU with 12GB memory.
CIFAR-10 and CIFAR-100: We use SGD with a momentum of 0.9 as our optimiser, and train the networks for 350 epochs, with a learning rate of 0.1 for the first 150 epochs, 0.01 for the next 100 epochs, and 0.001 for the last 100 epochs. We use a training batch size of 128. The training data is augmented by applying random crops and random horizontal flips.
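In PyTorch terms, this step schedule corresponds to something like the following sketch (the placeholder model is ours; the paper trains ResNet/Wide-ResNet/DenseNet variants):

import torch

model = torch.nn.Linear(3 * 32 * 32, 10)  # placeholder network for illustration
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# lr = 0.1 for epochs 0-149, 0.01 for 150-249, 0.001 for 250-349 (350 total);
# step the scheduler once per epoch.
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[150, 250], gamma=0.1)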
Tiny-ImageNet: We use SGD with a momentum of 0.9 as our optimiser, and train the models for 100 epochs with a learning rate of 0.1 for the first 40 epochs, 0.01 for the next 20 epochs and 0.001 for the last 40 epochs. We use a training batch size of 64. Note that we use 50 samples per class (i.e. a total of 10000 samples) from the training set as the validation set. Hence, the training is only on 90000 images. We use the Tiny-ImageNet validation set as our test set.
ImageNet: We use SGD as our optimiser with momentum of 0.9 and weight decay of 10⁻⁴, and train the models for 90 epochs with a learning rate of 0.01 for the first 30 epochs, 0.001 for the next 30 epochs, and 0.0001 for the last 30 epochs. We use a training batch size of 128. We divide the 50,000 validation images into a validation and a test set of 25,000 images each.
20 Newsgroups: We train the Global Pooling CNN Network Lin et al. (2014) using the Adam optimiser, with learning rate 0.001, and default betas 0.9 and 0.999. We used Glove word embeddings
Pennington et al. (2014) to train the network. We train the model for 50 epochs and use the model at the end to evaluate the performance.
All our experiments are implemented in PyTorch. The hyperparameters that are not explicitly mentioned above are set to their default values. For CIFAR-10/100 and Tiny-ImageNet, AdaFocal is implemented on top of the base code available from Mukhoti (2020). The code for 20 Newsgroups is implemented in PyTorch by adapting the code (TensorFlow) available from Kumar (2018).
The experimental results in the paper are reported for the model at the end of (1) CIFAR-10/100: 350 epochs, (2) Tiny-ImageNet: 100 epochs, (3) ImageNet: 90 epochs, and (4) 20 NewsGroups: 50 epochs.
F ADAECE AND CLASSWISE-ECE PERFORMANCE
Here, we compare the performance of AdaFocal against the baseline methods in terms of AdaECE and ClasswiseECE in Tables 3 and 4 respectively. For CIFAR-10/100, the values are reported for the model at the end of 350 epochs; for Tiny-ImageNet, at the end of 100 epochs; and for the 20 Newsgroups dataset, at the end of 50 epochs. From these tables, we observe that AdaFocal outperforms all the baseline methods by a substantial margin, especially if we compare the pre-temperature-scaling results.
G DEBIASED ESTIMATES OF ECE
H ECE ERROR BARS
I NUMBER OF BINS USED DURING ADAFOCAL TRAINING
Experiment details: CIFAR-10, ResNet-50 trained for 350 epochs. The results reported below are without temperature scaling. We compare AdaFocal with 5, 10, 15, 20, 30, and 50 adaptive (equal-mass) bins against FLSD-53. Note that there are two types of binning here: (1) the equal-mass bins used during AdaFocal training to draw calibration statistics from the validation set, which we vary in this experiment, and (2) the bins used to compute the reported ECE metric, which remain fixed at 15.
J AUROC FOR OUT-OF-DISTRIBUTION DETECTION
For ResNet-110 on CIFAR-10/SVHN, we were not able to reproduce the reported results of 96.74 and 96.92 for FL-3 in Mukhoti et al. (2020). Instead, we found those values to be 90.27 and 90.39, and we report them in Table 10.
K MOVING AVERAGE γ-UPDATE RULE
For the focal loss in the paper, $\mathcal{L}(p_n, t) = -(1-p_n)^{\gamma_{b,t}}\log p_n$, the unconstrained γ-update rule for AdaFocal is given by

$\gamma_{t+1} = \gamma_t \cdot \exp(C_{val,t+1} - A_{val,t+1}) \qquad (5)$
$\phantom{\gamma_{t+1}} = \gamma_t \cdot \exp(\mathcal{E}_{val,t+1}) \qquad (6)$
If instead we use an exponential moving average to update γ, the update rule (call it MA-α) is given by
$\gamma_{t+1} = (\gamma_t)^{\alpha} \cdot \big(e^{\mathcal{E}_{val,t+1}}\big)^{1-\alpha} \qquad (7)$
$\phantom{\gamma_{t+1}} = \gamma_{t-1}^{\alpha} \cdot e^{\alpha\,\mathcal{E}_{val,t}} \cdot e^{(1-\alpha)\,\mathcal{E}_{val,t+1}} \qquad (8)$
$\phantom{\gamma_{t+1}} = \gamma_{t-1}^{\alpha} \cdot e^{[\alpha\,\mathcal{E}_{val,t} + (1-\alpha)\,\mathcal{E}_{val,t+1}]} \qquad (9)$
The evolution or dynamics of γ is given in Figure 16.
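A one-function sketch of the MA-α rule of Eq. 7; α = 0.5 is just an illustrative value, and the function name is ours.

import math

def ma_gamma_update(gamma_t, E_val_next, alpha=0.5):
    # Eq. 7: geometric moving average between the previous gamma and the
    # fresh multiplicative correction exp(E_val).
    return (gamma_t ** alpha) * math.exp((1.0 - alpha) * E_val_next)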
L MULTIPLE RUNS OF ADAFOCAL WITH DIFFERENT γmax
Due to the stochastic nature of the experiments, AdaFocal's γs may end up following different trajectories across different runs (initializations), which in turn might lead to variations in the final results. In this section, we look at the extent of such variations for (1) unconstrained γ, (2) γ capped at γmax = 20, and (3) γ capped at γmax = 50. We study this for ResNet-50 trained multiple times on CIFAR-10, each time starting with a different random seed.
L.1 ADAFOCAL, γmax = 20
In Figure 17, we observe that AdaFocal with γmax = 20 is consistently (9 out of 9 times) better than FLSD-53. Figure 18 shows the evolution of γ across different runs.
L.2 ADAFOCAL, γmax = 50
In Figure 19, we observe that AdaFocal with γmax = 50 shows more variability than AdaFocal with γmax = 20 but is mostly better than FLSD-53. Figure 20 shows the evolution of γ across different runs.
L.3 ADAFOCAL, UNCONSTRAINED γ
In Figure 21, we observe that AdaFocal with unconstrained γ does exhibit some variability across different runs: 7 out of 9 times it performs better than FLSD-53, whereas the other two times it is similar or slightly worse.
The above behaviour is mostly due to variations in the trajectories of the γs for lower bins, as shown in Figure 22. For higher bins, we see the γs settling to similar values; however, for lower bins, since the γs are unconstrained, they blow up to very high values.
M ERROR, ECE AND BIN STATISTICS PLOTS FOR REST OF THE EXPERIMENTS

Review Questions
1. What is the main contribution of the paper, and how does it aim to improve the calibration of neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other methods?
3. How does the reviewer assess the clarity and quality of the paper's content, including the organization, writing, and visualizations?
4. What are some potential interpretations of the observed trade-offs between calibration and accuracy in the empirical studies?

Summary Of The Paper
The paper proposes a new set of algorithms to modify γ in the focal loss, where a tunable γ is applied to different regions of the model's predictions. Empirical studies on CIFAR-10 and CIFAR-100 data with various benchmark networks show better ECE compared to the other methods.
Review
The idea of the paper is to optimize the tuning of γ in the focal loss to improve the calibration of neural network models. The motivation makes a lot of sense. Although the novelty of the paper seems limited and the idea seems ad hoc, the proposed approach could be practically helpful for loss function and calibration studies. Some concerns with this paper are:
[1] The paper seems written in a rush, and its organization and writing are weak.
In the abstract, FLSD-53 is ambiguous as an abbreviation when first used.
The plots in the paper mostly have tiny legends.
[2] In Table 2, it seems AdaFocal improves calibration but not accuracy on the CIFAR-10 data, while the opposite holds on the CIFAR-100 data, compared to FLSD-53. Are there any interpretations of such calibration-accuracy trade-offs?
ICLR | Title
AdaFocal: Calibration-aware Adaptive Focal Loss
Abstract
Much recent work has been devoted to the problem of ensuring that a neural network’s confidence scores match the true probability of being correct, i.e. the calibration problem. Of note, it was found that training with focal loss leads to better calibrated deep networks than cross-entropy, while achieving the same level of accuracy Mukhoti et al. (2020). This success stems from focal loss regularizing the entropy of the network’s prediction (controlled by the hyper-parameter γ), thereby reining in the network’s overconfidence. Further improvement is expected if γ is selected independently for each training sample. However, the proposed Sample-Dependent Focal Loss (FLSD) in Mukhoti et al. (2020) is based on simple heuristics that does not take into account the difference in the network’s calibration behaviour for different samples (or groups of samples). As a result it is only slightly better than focal loss with fixed γ. In this paper, we propose a calibration-aware version of FLSD, called AdaFocal, which, at every training step t, adaptively modifies the γ for individual group of samples based on (1) γt−1 from the previous training step (2) the magnitude of the network’s under/over-confidence for those groups. We evaluate our method on various small to large-scale image recognition tasks and one NLP task, covering a variety of network architectures, to confirm that AdaFocal consistently achieves improved calibration without a significant loss in accuracy. Further, the models trained with AdaFocal are shown to have significantly improved Out-of-Distribution (OOD) detection capability.
1 INTRODUCTION
Neural networks have found tremendous success in almost every field including computer vision, natural language processing, and speech recognition. Over time, these networks have grown complex and larger in size to achieve state-of-the-art performance and they continue to evolve further in that direction. However, it has been well established that such high capacity networks suffer from poor calibration Guo et al. (2017), i.e. the confidence scores of the predictions do not reflect the real world probabilities of those predictions being true. For example, if the network assigns 0.8 confidence to a set of predictions, we should expect 80% of those predictions to be correct. However, this is far from reality since modern networks tend to be grossly over-confident. This is of great concern, particularly for mission-critical applications such as autonomous driving, medical diagnosis, wherein the downstream decision making not only rely on the predictions but also on their confidence.
In recent years, there has been a growing interest in developing methods for calibrating neural networks. These can be mainly divided into two categories (1) post-hoc approaches that perform calibration after training (2) methods that calibrate the model during training itself. The first includes methods such as Platt scaling Platt (1999), histogram binning Zadrozny & Elkan (2001), Isotonic regression Zadrozny & Elkan (2002), Bayesian binning and averaging Naeini et al. (2015); Naeini & Cooper (2016), and Spline fitting Gupta et al. (2021). Methods in the second category focus on training the model on an objective function that accounts for calibration as well, including Maximum Mean Calibration Error (MMCE) Kumar et al. (2018), Label smoothing Müller et al. (2019), and recently focal loss Mukhoti et al. (2020). These methods aim to produce inherently calibrated models which when combined with post training calibration methods lead to further improvements.
Contribution. Our work falls into the second category. We build upon the calibration properties of focal loss to propose a modification that further improves its performance. Firstly, we make the observation that while regular focal loss, with a fixed γ parameter, improves the overall calibration by preventing samples from being over-confident, it also leaves other samples under-confident. To
address this drawback, we propose a modification to the focal loss called AdaFocal that adjusts the γ for each training sample (or rather a group of samples) separately by taking into account the model’s under/over-confidence about a similar corresponding group in the validation set. We evaluate the performance of our method on four image classification tasks: CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, and one text classification task: 20 Newsgroup, using various model architectures, and show that AdaFocal substantially outperforms the regular focal loss and other state-of-the-art calibration techniques in the literature. We further study the performance of AdaFocal on an out-of-distribution detection task and find it to perform better than the competing methods. Finally, we find that the models trained using AdaFocal get innately calibrated to a level that most times do not significantly benefit from temperature scaling.
2 PROBLEM SETUP AND DEFINITIONS
Consider a classification setting where we are given a set of training data {(xn, ytrue,n)}, with xn ∈ X being the input and ytrue,i ∈ Y = {1, 2, . . . ,K} the associated ground-truth label. Using this data we wish to train a classifier fθ(x) that outputs a vector p̂ over theK classes. We also assume access to a validation set for hyper-parameter tuning and a test set for evaluating its performance. For example, fθ(·) can be a neural network with learnable parameters θ, x is an image, and p̂ is the output of a softmax layer whose kth element p̂k is the probability score for class k. We refer to ŷ = argmaxk∈Y p̂k as the network’s prediction and the associated probability score p̂ŷ as the predicted confidence, and the same quantity for the jth example is p̂ŷ,j .
In this setting, a network is said to be perfectly calibrated if the predicted confidence p̂ŷ reflects the true probability of the network classifying x correctly i.e. P(ŷ = ytrue | p̂ŷ = p) = p, ∀p ∈ [0, 1] Guo et al. (2017). Continuing our example, if the network assigns an average confidence score of 0.8 to a set of predictions then we should expect 80% of those to be correct. We define Calibration Error as E = p̂ŷ − P(ŷ = ytrue | p̂ŷ) and the Expected Calibration Error as Ep̂ŷ [E ] = Ep̂ŷ [ |p̂ŷ − P(ŷ = ytrue | p̂ŷ)| ] Guo et al. (2017). However, as the true calibration error cannot be computed empirically with a finite sized dataset, the following three approximations are generally used in the literature. That is, for a dataset {(xn, ytrue,n)}Nn=1, (1) ECE = ∑M i=1 |Bi| N |Ci − Ai| Guo et al. (2017), where Bi is equal-width bin that contains all examples j with p̂ŷ,j in the range [ iM , i+1 M ), Ci = 1 |Bi| ∑ j∈Bi p̂ŷ,j is the average confidence and Ai = 1 |Bi| ∑ j∈Bi 1(ŷj = ytrue,j) is the bin accuracy. Note that Ei = Ci − Ai is the empirical approximation of the calibration error E , (2) AdaECE = ∑M i=1 |Bi| N |Ci − Ai| Nguyen & O’Connor (2015), where ∀i, j |Bi| = |Bj | are adaptively sized (equal-mass) bins that contain an equal number of samples, and (3) ClasswiseECE Kumar et al. (2018); Kull et al. (2019) estimates the calibration over all K classes: ClasswiseECE = 1 K ∑M i=1 ∑K k=1 |Bi,k| N |Ci,k −Ai,k| where Ci,k = 1 |Bi,k| ∑ j∈Bi,k p̂k,j is the average confidence for
the kth class and Ai,k = 1|Bi,k| ∑ j∈Bi,k 1(ytrue,j = k) is the accuracy of the kth class in the ith bin.
Lastly, as ECE has been shown to be a biased estimate of true calibration Vaicenavicius et al. (2019), we additionally use two de-biased estimates of ECE namely ECEdebiased proposed in Kumar et al. (2019) and ECEsweep proposed in Roelofs et al. (2021) to further confirm our results.
3 CALIBRATION PROPERTIES OF FOCAL LOSS
Focal loss Lin et al. (2017) LFL(p) = −(1 − p)γ log p was originally proposed to improve the accuracy of classifiers by focusing on hard examples and down-weighting well classified examples. Recently it was further shown that focal loss may also result in significantly better calibrated models than cross entropy Mukhoti et al. (2020). This is because, based on the relation: LFL ≥ KL(q||p̂)− γH(p̂) where q is the one-hot target vector, focal loss while minimising the main KL divergence objective also increases the entropy of the prediction p̂. As a consequence this prevents the network from being overly confident on wrong predictions and overall improves calibration.
The regular focal loss with fixed γ, as we show in this section, does not achieve the best calibration. In Figure 1, we plot the calibration behaviour of ResNet50 in different bins when trained on CIFAR-10 with different focal losses. The ith bin’s calibration error subscripted by "val"Eval,i = Cval,i−Aval,i is computed on the validation set using 15 equal-mass binning. The figure shows the lowest (bin-0), a middle (bin-7) and highest bin (bin-7). For reference, the rest of the bins and their bin boundaries are
shown in Appendix B. From Figure 1 (a), we see that although focal loss γ = 4 achieves the overall lowest calibration error (AdaECE), there’s no single γ that performs the best across all the bins. For example, in bin-0 γ = 4, 5 seems to achieve better calibration whereas γ = 0, 3 are over-confident. For bin-7, on the other hand, γ = 3 seems to be better calibrated whereas γ = 4, 5 are under-confident and γ = 0 is over-confident.
This clearly indicates that using different γs for different bins can further improve the calibration. Such an attempt is presented in Mukhoti et al. (2020) called the Sample-Dependent Focal Loss (FLSD-53) which assigns γ = 5 if the training sample’s true class posterior p̂ytrue ∈ [0, 0.2) and γ = 3 if p̂ytrue ∈ [0.2, 1]. However, this strategy is fixed for every dataset-model pair and is based on simple heuristics of choosing higher γ for smaller values of p̂ytrue and relatively lower γ for higher values of p̂ytrue . However, from Figure 1(b), we see that FLSD-53 is also not the best strategy across all the bins. This, therefore, motivates the design of a γ selection strategy that can assign an appropriate γ for each bin based on the magnitude and sign of Eval,i. However, in order to design such a strategy we need solutions to the following two major challenges:
1. How do we find some correspondence between the "confidence of training samples", which we can manipulate during training by adjusting the entropy regularising parameter γ, and the "confidence of the validation samples", which we want to be actually manipulated but do not have direct control over? In other words, in order to indirectly control the confidence of a particular group of validation samples, how do we know which particular group of training samples’ confidence to be manipulated?
2. Given that there is a correspondence between a training group and a validation group (even if it’s loose), how do we arrive at the exact values of γ that will lead to better calibration?
We try to answer the first question in the next section and the answer to the second question leads to AdaFocal which is the main contribution of the paper.
4 CORRESPONDENCE BETWEEN CONFIDENCE OF TRAIN AND VAL. SAMPLES
In order to find some correspondence, an intuitive thing to do would be to group the validation samples into M equal-mass validation-bins, and then use these validation-bin boundaries to group the training samples as well. Then, we can compare the average confidence of the validation samples and the average confidence of the training samples, in the same validation-bin, to check for any correspondence.
Quantities of interest For binning validation samples, we always look at the confidence of the top predicted class ŷ denoted by p̂val,top (bin average: Cval,top). For training samples, on the other hand, instead of the confidence of the top predicted class ŷ denoted by p̂train,top (bin average: Ctrain,top), we will focus on the confidence of the true class ytrue denoted by p̂train,true (average: Ctrain,true) because during training we only care about p̂train,true which is manipulated through some loss function. For reference however, Figure 10 in Appendix C compares Ctrain,true and Ctrain,top to show that as the training set accuracy approaches 100%, the top predicted class and the true class for a training sample become the same. Henceforth, for a cleaner notation, we will always refer to Ctrain ≡ Ctrain,true and Cval ≡ Cval,top.
Common binning When training samples are grouped using the bin boundaries of the validationbins. In Figure 2(b), we compareCtrain,i in validation-bin-i 1 withCval,i in the same validation-bin-i, and find that there is indeed a good correspondence between the two quantities. For example in Figure 2(b), as γ increases from 0, 3 to 5, the solid-line (Ctrain,i) gets lower, and the same behaviour is observed on the starred-line (Cval,i) as well. For completeness, rest of the bins are shown in Figure 12 Appendix C. This is very encouraging as now we can expect (even though loosely) that if we increase/decrease the confidence of a group of training samples in some lower (or middle, or higher) probability region then the same will be reflected on a similar group of validation samples in lower (or middle, or higher) probability region. This therefore provides a way to indirectly control the value of Cval,i by manipulating Ctrain,i, and from a calibration point of view, our strategy going forward would be to exploit this correspondence to keep Ctrain,i (which we have control over during training) closer to Aval,i (the validation set accuracy in validation-bin-i) so that Cval,i also stays closer to Aval,i to overall reduce the calibration error Eval,i = Cval,i −Aval,i.
Independent binning Before proceeding, for completeness, we also look at the case when training samples and validation samples are grouped independently into their respective training-bins and validation-bins. Figure 2(a) compares Ctrain,i in training-bin-i with Cval,i in validation-bin-i. We observe a similar behaviour as mentioned above. Note that since the binning is independent, the boundaries of training-bin-imay not be exactly the same as that of validation-bin-i, however as shown in Figure 11 Appendix C (along with rest of the bins and their bin boundaries), they are quite close, meaning that a training group in lower (/middle/higher) probability region have good correspondence with the validation group in a similar nearby region.
Going forward, for the ease of algorithm design, we will simply stick to the case of "common binning" where training samples are grouped as per validation-bin boundaries. This will allows us to maintain a one-to-one correspondence between the boundaries of the ith training and validation group.
5 PROPOSED METHOD
Let’s denote the nth training sample’s true class posterior p̂ytrue by pn. Given that pn falls into validation-bin-b, our goal is to keep pn, or as per the discussion above its averaged equivalent Ctrain,b, closer to Aval,b so that the same is reflected on Cval,b. For manipulating pn, we will utilize the regularization effect that focal loss’s parameter γ has on the confidence of the predictions Mukhoti
1It may happen that no training sample belong to a particular validation-bin’s boundaries. In that case, Ctrain,i has been shown to drop to zero for example in bin-14 in Figure 2 (b).
et al. (2020). At this point, one can choose to update γb either based on (1) how far pn is from Aval,b i.e. γ = f(pn − Aval,b) or (2) how far Cval,b is from Aval,b i.e. γ = f(Cval,b − Aval,b). Such a γ-update-rule should ensure that whenever the model is over-confident, i.e. pn > Aval,b (or Cval,b > Aval,b), γ is increased so that the gradients get smaller which prevents pn from increasing further. On the other hand, when pn < Aval,b (or Cval,b < Aval,b), i.e. the model is under-confident, we decrease γ so as to get larger gradients that in turn will increase pn 2.
Based on this discussion, next we design and study a calibration-aware γ-update strategy called CalFocal, which with some additional modifications lead to AdaFocal.
5.1 CALIBRATION AWARE FOCAL LOSS (CALFOCAL)
Case 1: γ = f(pn −Aval,b) Treating Aval,b as the point that we want pn to not deviate from, we make the focal loss parameter γ a function of pn −Aval,b to get
LCalFocal(pn) = −(1− pn)γn log pn, with, γn = exp(λ(pn −Aval,b)), (1)
where, b is the validation-bin in which pn falls. The hyper-parameter λ is the scaling factor which combined with the exponential function helps to quickly ramp up/down γ. The exponential function adheres to the γ-update rule mentioned earlier and also ensures γ is > 0. Figure 3(a) plots LCalFocal vs. pn for Aval,b = 0.8. We see that based on the strength of λ, the loss drastically drops near pn = 0.8 and thereafter remains close to zero. This shows that LCalFocal aims is to first push p towards 0.8 and then slow its growth towards overconfidence. Next, in Figure 3(c), we find that CalFocal with λ = 10, 100 is able to reduce the calibration error compared to cross entropy but it is still far from FLSD-53’s performance. Also note in Figure 3(b) that too high λ (=100) affects the accuracy of model. Most importantly, Figure 3(d) compares Ctrain,i with Cval,i (and also Aval,i) for bin-0, where we find some evidence that the strategy of bringing pn or Ctrain,i (solid lines) closer to Aval,i (dashed lines) results in Cval,i (starred lines) getting closer to Aval,i as well, thus reducing the calibration error Eval,i = Cval,i −Aval,i slightly.
Case 2: γ = f(Cval,b − Aval,b) Note that Eq. 1 assigns a different γn for each training sample. To reduce computation and avoid using a different γn for each training sample, one can instead use a common γb for all the training samples that fall into the validation-bin-b by simply making it a function of Cval,b −Aval,b instead of pn −Aval,b.
LCalFocal(pn) = −(1− pn)γb log pn, with, γb = exp(λ(Cval,b −Aval,b)) (2)
where, b is the validation-bin in which pn falls. As shown in Appendix D, it’s performance is very similar (or slightly better than) CalFocal in Eq. 1. Further, it makes more sense to update γ based on how far Cval,b is from Aval,b instead of how far pn is from Aval,b because, as shown in Figure 3(d) bin-0, one may find Cval,b (starred lines) quite closer to Aval,b (dashed lines) even when pn or its
2Note that for focal loss increasing γ does not always lead to smaller gradients. This mostly holds true in the region pn approximately > 0.2 (see Figure 3(a) in Mukhoti et al. (2020)). However, in practice and as shown by the training-bin boundaries of bin-0 and bin-1 in Figure in Figure 11 Appendix C, we find majority of the training samples to lie above 0.2 during the majority of the training, and therefore, for the experiments in this paper, we simply stick to the rule of increasing γ to decrease gradients and stop pn from increasing and vice versa.
average equivalent Ctrain (solid lines) is far from Aval,b. At this point when Cval,b = Aval,b, we should stop updating γ further, even though pn −Aval,b 6= 0, as we have reached our goal of making Eval,b = Cval,b −Aval,b = 0. Therefore, we use Eq. 2 of Case 2 as base for AdaFocal.
Limitations of CalFocal: (1) Let’s say at some point of training, a high γb over the next few epochs reduces the calibration error Eval,b = Cval,b −Aval,b. Then, it is desirable to continue the training with the same high γb. However, note CalFocal’s update rule in Eq. 2 which will reduce γ → 1 as the Cval,b − Aval,b → 0. (2) At some point let’s say Cval,b − Aval,b is quite high. This will set γb to some high value as well depending on the hyper-parameter λ. Assuming this γb is still not high enough to bring down the confidence, we would want a way to further increase γb. However, CalFocal is incapable of doing so as it will continue to hold at γb = exp(λ(Cval,b − Aval,b)). By addressing these two issues in the next sub-section we present the final algorithm for AdaFocal.
5.2 CALIBRATION-AWARE ADAPTIVE FOCAL LOSS (ADAFOCAL)
A straightforward way to address the above limitations is to make γb,t depend on γb,t−1 i.e.
L(pn, t) = −(1− pn)γb,t log pn, with, γb,t = γb,t−1 ∗ exp(Cval,b −Aval,b). (3) This update rule address the limitations of CalFocal in the following way. Let’s say at some point we observe over-confidence i.e. Eval,b = Cval,b − Aval,b > 0. Then, in the next step γb will be increased. In the subsequent steps, it will continue to increase unless the calibration error Eval,b starts decreasing (this additional increase in γ was not possible with CalFocal). At this point, if we find Eval,b to start decreasing, that would reduce the increase in γb over the next epochs and γb will ultimately settle down to a value when Eval,b = 0 (CalFocal at Eval,b = 0 will cause γ to go down to 1). Next, if this current value of γb starts causing under-confidence i.e. Cval,b −Aval,b < 0, then the update rule will kick in to reduce γ thus allowing Cval,b to be increased back to Aval,b. This oscillating behaviour of AdaFocal around the desired point of Cval,b = Aval,b is its main adavantage in reducing calibration error in every bin. Additionally, also note the absence of the hyper-parameter λ in the exponent of Eq. 3 which makes AdaFocal hyper-parameter free.
Finally, note an undesirable property of Eq. 3: the unbounded exponential update. This may easily cause γt to explode, since it can be expanded as $\gamma_t = \gamma_{t-1} \exp(E_{val,t}) = \gamma_0 \exp(E_{val,0} + E_{val,1} + \cdots + E_{val,t-1} + E_{val,t})$. Thus, if Eval,t > 0 for quite a few epochs, γt will become so large that even if Eval,t < 0 in the subsequent epochs, it may not decrease to a desired level. We remedy this by simply constraining γt to an upper bound γmax, which gives the AdaFocal loss
$$\mathcal{L}_{AdaFocal}(p_n, t) = -(1 - p_n)^{\gamma_{b,t}} \log p_n, \quad \text{with} \quad \gamma_{b,t} = \min\big\{\gamma_{max},\; \gamma_{b,t-1} \cdot e^{C_{val,b} - A_{val,b}}\big\} \tag{4}$$
An algorithmic description of training with AdaFocal (or CalFocal) is given in Algorithm 1. Limitation: One may argue that γmax is again a hyper-parameter; however, note that it does not require any special fine-tuning. Its sole purpose is to stop γ from exploding, and any reasonable value around 20 works quite well in practice. For all our experiments, we use γmax = 20. For a comparison of AdaFocal with γmax = 20, γmax = 50, and unconstrained γmax = ∞, please refer to Appendix L.
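To make the update concrete, below is a minimal PyTorch-style sketch of Eq. 4. The function and variable names (adafocal_loss, update_gammas, bin_ids) are our own illustrative assumptions, not the authors' exact implementation from Algorithm 1; the per-bin statistics Cval,b and Aval,b are assumed to be recomputed on the validation set once per epoch.

```python
import torch

def adafocal_loss(p_true, gamma_per_bin, bin_ids):
    # Eq. 4: L = -(1 - p)^gamma_b * log(p), with gamma_b shared by all
    # training samples whose p_true falls in validation-bin b.
    gamma = gamma_per_bin[bin_ids]              # per-sample gamma_b
    p = p_true.clamp_min(1e-12)                 # numerical safety for log
    return -((1.0 - p_true) ** gamma * torch.log(p)).mean()

def update_gammas(gamma_per_bin, C_val, A_val, gamma_max=20.0):
    # gamma_b <- min(gamma_max, gamma_b * exp(C_val_b - A_val_b)),
    # applied once per epoch from validation-set bin statistics.
    new_gamma = gamma_per_bin * torch.exp(C_val - A_val)
    return new_gamma.clamp_max(gamma_max)
```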
6 EXPERIMENTS
Experimental setup We evaluate the performance of our proposed method on image and text classification tasks. For image classification, we use CIFAR-10, CIFAR-100 Krizhevsky (2009), Tiny-ImageNet Deng et al. (2009), and ImageNet Russakovsky et al. (2015) to analyze the calibration of ResNet-50, ResNet-110 He et al. (2016), Wide-ResNet-26-10 Zagoruyko & Komodakis (2016), and DenseNet-121 Huang et al. (2017) models. For text classification, we use the 20 Newsgroups dataset Lang (1995) and train the Global Pooling CNN model Lin et al. (2014). Further details about the datasets, models, and experimental configurations are given in Appendix E.
Baseline As baseline calibration methods we use MMCE Kumar et al. (2018), Brier loss Brier (1950), Label smoothing Müller et al. (2019) and sample-dependent focal loss FLSD-53. We also report the effect of temperature scaling Guo et al. (2017) on top of these calibration methods. Following Mukhoti et al. (2020), we select the optimal temperature that produces the minimum ECE on the validation set by searching in the interval (0, 10] with step size of 0.1.
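For reference, the temperature search described above can be sketched as follows; ece_fn is assumed to be any ECE estimator over (scaled logits, labels), and the names are illustrative, not the released code.

```python
import numpy as np

def find_optimal_temperature(val_logits, val_labels, ece_fn):
    # Grid-search T in (0, 10] with step 0.1 and keep the T that
    # minimizes ECE on the validation set (following Mukhoti et al., 2020).
    best_T, best_ece = 1.0, float("inf")
    for T in np.arange(0.1, 10.0 + 1e-9, 0.1):
        ece = ece_fn(val_logits / T, val_labels)
        if ece < best_ece:
            best_T, best_ece = T, ece
    return best_T
```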
Results. In Figure 4, we compare AdaFocal against cross entropy (CE) and FLSD-53 for ResNet-50 trained on various small to large-scale image datasets. We chose FLSD-53 as our competitive baseline
as it was shown in Mukhoti et al. (2020) to be consistently better than MMCE, Brier Loss, and Label smoothing across many dataset-model pairs. The figure plots the test set error and the ECE calibration metric. In Figure 5, for ResNet-50 on CIFAR-10 and ImageNet, we plot (1) the calibration statistics Eval = Cval − Aval of the validation set and (2) the dynamics of the associated γt used by AdaFocal during training, for a few bins covering the lower, middle, and higher probability regions.
From these figures, we first observe that for CIFAR-10, CIFAR-100, and Tiny-ImageNet, FLSD-53 is much better calibrated than CE. This is because, as shown in Figure 5(a) for ResNet-50 on CIFAR-10, CE is over-confident compared to FLSD-53 in every bin. For ImageNet, however, the behaviour is reversed: FLSD-53 is more poorly calibrated than CE. The reason, as shown in Figure 5(b), is that due to the use of the high values γ = 5, 3, FLSD-53 makes the model largely under-confident in each bin, leading to an overall high calibration error. This shows that FLSD-53 is a heuristic strategy (derived from a limited number of dataset-model pairs) that does not generalize well. AdaFocal, on the other hand, is well calibrated for all four dataset-model pairs while achieving similar accuracy.
The dynamics/evolution of γt during training for different bins is shown in Figure 5: (1) for CIFAR-10, we find γt to be closer to 1 for higher bins and closer to 20 for lower bins. These γs found by AdaFocal result in better calibration than the γ = 5, 3 of FLSD-53. (2) For ImageNet, we find AdaFocal's γ → 0. This makes sense because for ImageNet, from Figure 4(d), cross entropy (i.e. γ = 0 for every bin) is much better calibrated than FLSD-53, and AdaFocal (starting from γ = 1) also ultimately settles down to CE (γ = 0) to achieve a similar level of calibration. This confirms that during training, unlike CE or FLSD-53, AdaFocal, being aware of the network's current under/over-confidence, is able to guide the γs to values that maintain a well calibrated model at every step. Also note that for an unseen dataset-model pair there is no way to know beforehand which γ will perform better, but this empirical evidence shows that AdaFocal will automatically find the appropriate γs.
The rest of the experiments are shown in Table 1 (ECE) and Table 2 (Error)3. From Table 1, we observe that prior to temperature scaling, AdaFocal outperforms the baseline methods by a substantial margin in 9 out of 11 cases. With post-temperature scaling included, AdaFocal achieves the lowest calibration error in 7 out of the 11 experiments. Further, observe that in many cases temperature scaling on top of AdaFocal does not offer any improvement (optimal temperature = 1). For the rest, the optimal temperature is close to 1, indicating that AdaFocal produces innately calibrated models during training itself. The consistency of AdaFocal across other calibration metrics is shown through AdaECE and classwise-ECE in Appendix F. $\mathrm{ECE}_{debiased}$ (15 and 30 bins), $\mathrm{ECE}_{EW\text{-}sweep}$ (equal-width), and $\mathrm{ECE}_{EM\text{-}sweep}$ (equal-mass) are reported in Appendix G. The significance of the results is confirmed through ECE error bars, with means and standard deviations computed over 5 runs, in Appendix H.
Number of bins The ECE metrics in the paper are reported using 15 bins. For AdaFocal training, we experiment with 5, 10, 15, 20, 30, and 50 equal-mass (adaptive) bins when drawing calibration statistics from the validation set, as reported in Appendix I. We find the best results to come from the range 10 to 20. Performance degrades when the number of bins is too small (< 10) or too large (> 20); therefore, for AdaFocal training in the paper we use 15 bins as well.
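A minimal sketch of the equal-mass (adaptive) bin statistics used during AdaFocal training is given below; conf and correct are assumed to be the top-class validation confidences and 0/1 correctness indicators, and the implementation is illustrative rather than the authors' exact code.

```python
import numpy as np

def equal_mass_bin_stats(conf, correct, num_bins=15):
    # Sort samples by confidence and split into num_bins (nearly) equal-sized
    # bins; return per-bin average confidence C_val and accuracy A_val,
    # so that the per-bin calibration error is E_val = C_val - A_val.
    order = np.argsort(conf)
    C_val = np.zeros(num_bins)
    A_val = np.zeros(num_bins)
    for b, idx in enumerate(np.array_split(order, num_bins)):
        C_val[b] = conf[idx].mean()
        A_val[b] = correct[idx].mean()
    return C_val, A_val
```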
Out-of-Distribution (OOD) detection. Following Mukhoti et al. (2020), we report the performance of AdaFocal on an OOD detection task. We train ResNet-110 and Wide-ResNet-26-10 on
3While reproducing the baseline experiments in Mukhoti et al. (2020) we obtained very similar results; therefore, we simply borrow the exact values to maintain a consistent comparison.
CIFAR-10 as the in-distribution data and test on SVHN Netzer et al. (2011) and CIFAR-10-C Hendrycks & Dietterich (2019) (with level-5 Gaussian noise corruption) as OOD data. Using the entropy of the softmax as the measure of uncertainty, the corresponding ROC plots are shown in Figure 6, and the AUROC scores are reported in Table 10 in Appendix J. We see that models trained with AdaFocal outperform focal loss γ = 3 (FL-3) and FLSD-53. For the exact AUROC scores, please refer to Appendix J. These results further highlight the benefit of an inherently calibrated model produced using AdaFocal, as post-hoc techniques such as temperature scaling, as shown in the figure, are ineffective under distributional shift Snoek et al. (2019).
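The entropy-based OOD score and the AUROC evaluation used above can be sketched as follows; the helper names are illustrative, and scikit-learn is assumed for AUROC.

```python
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

def softmax_entropy(logits):
    # Predictive entropy of the softmax; higher entropy -> more uncertain,
    # hence more likely out-of-distribution.
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1)

def ood_auroc(logits_in, logits_out):
    # AUROC of separating in-distribution (label 0) from OOD (label 1)
    # using the entropy as the score.
    scores = torch.cat([softmax_entropy(logits_in), softmax_entropy(logits_out)])
    labels = torch.cat([torch.zeros(len(logits_in)), torch.ones(len(logits_out))])
    return roc_auc_score(labels.numpy(), scores.numpy())
```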
7 CONCLUSION
In this work, we first revisit the calibration properties of the regular focal loss and highlight the downside of using a fixed γ for all samples. In particular, by studying the calibration behaviour of different samples in different probability regions, we find that there is no single γ that achieves the best calibration over the entire region. We use this observation to motivate the selection of γ independently for each sample (or group of samples) based on knowledge of the network's under/over-confidence. We propose a calibration-aware adaptive focal loss called AdaFocal that accounts for such information and updates γt at every step based on γt−1 from the previous step and the magnitude of the network's under/over-confidence. We find AdaFocal to perform consistently better across different dataset-model pairs, producing innately calibrated models that most times do not substantially benefit from the post-hoc processing of temperature scaling. Additionally, we find models trained with AdaFocal to be significantly better on an out-of-distribution detection task.
Reproducibility For reproducibility, we have included in the supplementary material a zip file that contains the code base for running the experiments. For running particular experiments:
• CIFAR-10, ResNet-50, Cross entropy: python train.py --dataset cifar10 --model resnet50 --loss cross_entropy --num_bins 15 -e 400 --save-path experiments/cifar10_resnet50_ce
• CIFAR-100, ResNet-50, Cross entropy: python train.py --dataset cifar100 --model resnet50 --loss cross_entropy --num_bins 15 -e 400 --save-path experiments/cifar100_resnet50_ce
• Tiny-ImageNet, ResNet-50, Cross entropy: python train.py --dataset tiny_imagenet --model resnet50_ti --loss cross_entropy --num_bins 15 --first-milestone 40 --second-milestone 60 -e 100 -b 64 -tb 64 --dataset-root data/tiny-imagenet-200 --save-path experiments/tinyImageNet_resnet50_ce
• 20 Newsgroups, CNN, Cross entropy: python main.py --loss cross_entropy --num-epochs 50 --num-bins 15 --save-path experiments/cnn_ce
APPENDICES
A ADAFOCAL’S GENERALIZATION TO LARGE SCALE DATASET (IMAGENET)
For ImageNet, FLSD-53 seems to perform very poorly in terms of calibration. The reason is that, due to the higher values of γ = 5, 3, FLSD-53 becomes extremely under-confident in each bin, leading to a high calibration error. AdaFocal, on the other hand, remains well calibrated, which confirms that during training, unlike CE or FLSD-53, AdaFocal, being aware of the network's current under/over-confidence (through the validation set), is able to adjust the γs in a way that maintains a well calibrated model at every step. Further, in Figure 8, note the dynamics/evolution of γt in different bins. For ImageNet, we find AdaFocal's γ → 0, which makes sense because, from Figure 7, cross entropy (i.e. γ = 0 for every bin) is much better calibrated than FLSD-53, and AdaFocal (starting from γ = 1) settles down to CE (γ = 0). Note that for an unseen dataset-model pair it is not possible to know beforehand whether CE or focal loss will perform better. However, from these experiments, we find strong evidence that, for any dataset-model pair, AdaFocal will lead to the γs that result in the best calibration.
B CALIBRATION BEHAVIOUR OF FOCAL LOSS IN DIFFERENT BINS
In the main paper, we showed the calibration behaviour of different focal losses for ResNet-50 trained on CIFAR-10 for only a few bins. For completeness, the rest of the bins and their calibration error Ei = Cval,i − Aval,i are shown in Figure 9 for focal losses with γ = 0, 3, 4, 5. We observe that there is no single γ that performs the best across all the bins. Rather, every bin seems to have a particular γ that achieves the best calibration.
C CORRESPONDENCE BETWEEN CONFIDENCE OF TRAINING AND VALIDATION SAMPLES
D CALFOCAL LOSS
E DATASETS AND EXPERIMENTS
E.1 DATASET DESCRIPTION
CIFAR-10 Krizhevsky (2009): This dataset contains 60,000 coloured images of size 32 × 32, which are equally divided into 10 classes. A split of 45,000/5,000/10,000 images is used as the train/validation/test sets respectively.
CIFAR-100 Krizhevsky (2009): This dataset contains 60,000 coloured images of size 32 × 32, which are equally divided into 100 classes. A split of 45,000/5,000/10,000 images is used as the train/validation/test sets respectively.
ImageNet Russakovsky et al. (2015): ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 is an image classification and localization dataset. This dataset spans 1000 object classes and contains 1,281,167 training images and 50,000 validation images.
Tiny-ImageNet Deng et al. (2009): It is a subset of the ImageNet dataset with 64 × 64 dimensional images and 200 classes. It has 500 images per class in the training set and 50 images per class in the validation set.
20 Newsgroups Lang (1995): This dataset contains 20,000 news articles, categorised evenly into 20 different newsgroups. Some of the newsgroups are very closely related to each other (e.g. comp.sys.ibm.pc.hardware / comp.sys.mac.hardware), while others are highly unrelated (e.g. misc.forsale / soc.religion.christian). We use a train/validation/test split of 15,098/900/3,999 documents.
E.2 EXPERIMENTAL DETAILS
For all our experiments, we used an Nvidia Titan X Pascal GPU with 12GB memory.
CIFAR-10 and CIFAR-100: We use SGD with a momentum of 0.9 as our optimiser, and train the networks for 350 epochs, with a learning rate of 0.1 for the first 150 epochs, 0.01 for the next 100 epochs, and 0.001 for the last 100 epochs. We use a training batch size of 128. The training data is augmented by applying random crops and random horizontal flips.
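As a reference, the CIFAR-10/100 optimiser and step schedule described above map to the following PyTorch sketch; model and train_one_epoch are placeholders we assume, not part of the released code.

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# lr = 0.1 for epochs 0-149, 0.01 for epochs 150-249, 0.001 for epochs 250-349
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150, 250], gamma=0.1)
for epoch in range(350):
    train_one_epoch(model, optimizer)  # assumed training loop over batches of 128
    scheduler.step()
```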
Tiny-ImageNet: We use SGD with a momentum of 0.9 as our optimiser, and train the models for 100 epochs with a learning rate of 0.1 for the first 40 epochs, 0.01 for the next 20 epochs and 0.001 for the last 40 epochs. We use a training batch size of 64. Note that we use 50 samples per class (i.e. a total of 10000 samples) from the training set as the validation set. Hence, the training is only on 90000 images. We use the Tiny-ImageNet validation set as our test set.
ImageNet: We use SGD as our optimiser with momentum of 0.9 and weight decay $10^{-4}$, and train the models for 90 epochs with a learning rate of 0.01 for the first 30 epochs, 0.001 for the next 30 epochs, and 0.0001 for the last 30 epochs. We use a training batch size of 128. We divide the 50,000 validation images into validation and test sets of 25,000 images each.
20 Newsgroups: We train the Global Pooling CNN Network Lin et al. (2014) using the Adam optimiser, with learning rate 0.001, and default betas 0.9 and 0.999. We used GloVe word embeddings
Pennington et al. (2014) to train the network. We train the model for 50 epochs and use the model at the end to evaluate the performance.
All our experiments are implemented in PyTorch. The hyperparameters that are not explicitly mentioned above are set to their default values. For CIFAR-10/100 and Tiny-ImageNet, AdaFocal is implemented on top of the base code available from Mukhoti (2020). The code for 20 Newsgroups is implemented in PyTorch by adapting the code (TensorFlow) available from Kumar (2018).
The experimental results in the paper are reported for the model at the end of (1) CIFAR-10/100: 350 epochs, (2) Tiny-ImageNet: 100 epochs, (3) ImageNet: 90 epochs, and (4) 20 NewsGroups: 50 epochs.
F ADAECE AND CLASSWISE-ECE PERFORMANCE
Here, we compare the performance of AdaFocal against the baseline methods in terms of AdaECE and classwise-ECE in Table 3 and 4 respectively. For CIFAR-10/100, the values are reported for the model at the end of 350 epochs; for Tiny-ImageNet, at the end of 100 epochs; and for 20 NewsGroup dataset, at the end of 50 epochs. From these tables, we observe that AdaFocal outperforms all the baseline methods by a substantial margin, especially if we compare the pre-temperature scaling results.
G DEBIASED ESTIMATES OF ECE
H ECE ERROR BARS
I NUMBER OF BINS USED DURING ADAFOCAL TRAINING
Experiment details: CIFAR-10, ResNet-50 trained for 350 epochs. The results reported below are without temperature scaling: our method AdaFocal with 5, 10, 15, 20, 30, and 50 adaptive (equal-mass) bins vs. FLSD-53. Note here that there are two types of binning: (1) the equal-mass binning used during AdaFocal training to draw calibration statistics from the validation set, which is what we vary here, and (2) the binning used to report the ECE metric, which is kept fixed at 15 bins.
J AUROC FOR OUT-OF-DISTRIBUTION DETECTION
For ResNet-110 on CIFAR-10/SVHN, we were not able to reproduce the reported results of 96.74 and 96.92 for FL-3 in Mukhoti et al. (2020). Instead, we found those values to be 90.27 and 90.39, and we report them in Table 10.
K MOVING AVERAGE γ-UPDATE RULE
For the focal loss in the paper, $\mathcal{L}(p_n, t) = -(1 - p_n)^{\gamma_{b,t}} \log p_n$, the unconstrained γ-update rule for AdaFocal is given by
$$\gamma_{t+1} = \gamma_t \cdot \exp(C_{val,t+1} - A_{val,t+1}) \tag{5}$$
$$\gamma_{t+1} = \gamma_t \cdot \exp(E_{val,t+1}) \tag{6}$$
If instead we use an exponential moving average to update γ, then the update rule (let's call it MA-α) is given by
$$\gamma_{t+1} = (\gamma_t)^{\alpha} \cdot \big(e^{E_{val,t+1}}\big)^{1-\alpha} \tag{7}$$
$$\gamma_{t+1} = \gamma_{t-1}^{\alpha} \cdot e^{\alpha E_{val,t}} \cdot e^{(1-\alpha) E_{val,t+1}} \tag{8}$$
$$\gamma_{t+1} = \gamma_{t-1}^{\alpha} \cdot e^{[\alpha E_{val,t} + (1-\alpha) E_{val,t+1}]} \tag{9}$$
The evolution or dynamics of γ is given in Figure 16.
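As a hedged sketch, the MA-α rule of Eq. 7 is a one-line update; the function name and arguments are illustrative.

```python
import numpy as np

def ma_alpha_update(gamma_t, E_val_next, alpha):
    # Exponential-moving-average gamma update (Eq. 7):
    # gamma_{t+1} = gamma_t^alpha * exp((1 - alpha) * E_val_{t+1}).
    return gamma_t ** alpha * np.exp((1.0 - alpha) * E_val_next)
```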
L MULTIPLE RUNS OF ADAFOCAL WITH DIFFERENT γmax
Due to the stochastic nature of the experiments, AdaFocal's γs may end up following different trajectories across different runs (initializations), which in turn might lead to variations in the final results. In this section, we look at the extent of such variations for (1) unconstrained γ, (2) γ capped at γmax = 20, and (3) γ capped at γmax = 50. We study this for ResNet-50 trained multiple times on CIFAR-10, each time starting with a different random seed.
L.1 ADAFOCAL, γmax = 20
In Figure 17, we observe that AdaFocal with γmax = 20 is consistently (9 out of 9 times) better than FLSD-53. Figure 18 shows the evolution of γ across different runs.
L.2 ADAFOCAL, γmax = 50
In Figure 19, we observe that AdaFocal with γmax = 50 has more variability than AdaFocal with γmax = 20 but is mostly better than FLSD-53. Figure 20 shows the evolution of γ across different runs.
L.3 ADAFOCAL, UNCONSTRAINED γ
In Figure 21, we observe that AdaFocal with unconstrained γ does exhibit some variability across different runs: 7 out of 9 times it performs better than FLSD-53 whereas the other two times it is similar or slightly worse.
The above behaviour is mostly due to the variations in the trajectories of the γs for lower bins, as shown in Figure 22. For higher bins, we see the γs settling to similar values; however, for lower bins, since the γs are unconstrained, they blow up to very high values.
M ERROR, ECE AND BIN STATISTICS PLOTS FOR REST OF THE EXPERIMENTS | 1. What is the focus of the paper regarding deep learning?
2. What are the strengths of the proposed approach, particularly in its adaptive adjustment mechanism?
3. What are the weaknesses of the paper, especially regarding its experimental results and generalizability?
4. How does the reviewer assess the clarity and quality of the figures presented in the paper?
5. Are there any concerns about the impact of the proposed method on test accuracies, and how does the reviewer suggest improving the method? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies the calibration of deep learning, which aims to make confidence scores accurately describe the correctness probabilities of predictions. The authors improve focal loss and propose a calibration-aware focal loss for better calibration. The proposed approach adaptively adjusts the coefficient of the focal loss according to momentum-like updates and the current predictions' confidence. The authors conduct experiments on the SVHN and CIFAR-10/100 datasets to verify the approach's efficacy.
Review
Strength:
This paper gives an in-depth analysis of the calibration problem in deep learning. The authors investigate the calibration properties of focal loss, observe that a fixed γ is not optimal for calibration, and find a correspondence that allows γ to be adjusted adaptively in each iteration.
The authors present many figures and investigation experiments for analysis and verification of effectiveness. The proposed method is effective on the CIFAR-10/100 and SVHN datasets.
Weakness:
The generalization of the proposed method is not guaranteed. The observed problems of a fixed γ are empirical and may vary between datasets. On the other hand, the optimal hyperparameters for the proposed method may also vary between datasets. Besides, it would be better to report results on large-scale datasets such as ImageNet. The current experimental results are not convincing.
The figures in this paper are not easy to understand. (a) There are too many figures, and it is hard to understand what each line represents. Some legends even block important information. The authors may present fewer but more important figures, with concise conclusions in the captions. (b) It would be better to smooth the lines in some figures. For example, Figure 1 would be better if the lines were smoothed. (c) Some figures are too small to read after printing, and some figures are not readable without color printing.
How will the proposed method affect the test accuracies? Will the proposed method lead to an accuracy drop as the price of improved calibration? Figure 4 reports the accuracies, but it is very hard to tell the difference in accuracies.
ICLR | Title
AdaFocal: Calibration-aware Adaptive Focal Loss
Abstract
Much recent work has been devoted to the problem of ensuring that a neural network's confidence scores match the true probability of being correct, i.e. the calibration problem. Of note, it was found that training with focal loss leads to better calibrated deep networks than cross-entropy, while achieving the same level of accuracy Mukhoti et al. (2020). This success stems from focal loss regularizing the entropy of the network's prediction (controlled by the hyper-parameter γ), thereby reining in the network's overconfidence. Further improvement is expected if γ is selected independently for each training sample. However, the proposed Sample-Dependent Focal Loss (FLSD) in Mukhoti et al. (2020) is based on simple heuristics that do not take into account the difference in the network's calibration behaviour for different samples (or groups of samples). As a result, it is only slightly better than focal loss with fixed γ. In this paper, we propose a calibration-aware version of FLSD, called AdaFocal, which, at every training step t, adaptively modifies the γ for individual groups of samples based on (1) γt−1 from the previous training step and (2) the magnitude of the network's under/over-confidence for those groups. We evaluate our method on various small to large-scale image recognition tasks and one NLP task, covering a variety of network architectures, to confirm that AdaFocal consistently achieves improved calibration without a significant loss in accuracy. Further, the models trained with AdaFocal are shown to have significantly improved Out-of-Distribution (OOD) detection capability.
1 INTRODUCTION
Neural networks have found tremendous success in almost every field, including computer vision, natural language processing, and speech recognition. Over time, these networks have grown more complex and larger in size to achieve state-of-the-art performance, and they continue to evolve further in that direction. However, it has been well established that such high-capacity networks suffer from poor calibration Guo et al. (2017), i.e. the confidence scores of the predictions do not reflect the real-world probabilities of those predictions being true. For example, if the network assigns 0.8 confidence to a set of predictions, we should expect 80% of those predictions to be correct. However, this is far from reality, since modern networks tend to be grossly over-confident. This is of great concern, particularly for mission-critical applications such as autonomous driving and medical diagnosis, wherein the downstream decision making relies not only on the predictions but also on their confidence.
In recent years, there has been a growing interest in developing methods for calibrating neural networks. These can be mainly divided into two categories (1) post-hoc approaches that perform calibration after training (2) methods that calibrate the model during training itself. The first includes methods such as Platt scaling Platt (1999), histogram binning Zadrozny & Elkan (2001), Isotonic regression Zadrozny & Elkan (2002), Bayesian binning and averaging Naeini et al. (2015); Naeini & Cooper (2016), and Spline fitting Gupta et al. (2021). Methods in the second category focus on training the model on an objective function that accounts for calibration as well, including Maximum Mean Calibration Error (MMCE) Kumar et al. (2018), Label smoothing Müller et al. (2019), and recently focal loss Mukhoti et al. (2020). These methods aim to produce inherently calibrated models which when combined with post training calibration methods lead to further improvements.
Contribution. Our work falls into the second category. We build upon the calibration properties of focal loss to propose a modification that further improves its performance. Firstly, we make the observation that while regular focal loss, with a fixed γ parameter, improves the overall calibration by preventing samples from being over-confident, it also leaves other samples under-confident. To
address this drawback, we propose a modification to focal loss called AdaFocal that adjusts the γ for each training sample (or rather a group of samples) separately, by taking into account the model's under/over-confidence about a similar corresponding group in the validation set. We evaluate the performance of our method on four image classification tasks: CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet, and one text classification task: 20 Newsgroups, using various model architectures, and show that AdaFocal substantially outperforms the regular focal loss and other state-of-the-art calibration techniques in the literature. We further study the performance of AdaFocal on an out-of-distribution detection task and find it to perform better than the competing methods. Finally, we find that the models trained using AdaFocal are innately calibrated to a level that most times does not significantly benefit from temperature scaling.
2 PROBLEM SETUP AND DEFINITIONS
Consider a classification setting where we are given a set of training data {(xn, ytrue,n)}, with xn ∈ X being the input and ytrue,n ∈ Y = {1, 2, . . . , K} the associated ground-truth label. Using this data, we wish to train a classifier fθ(x) that outputs a vector p̂ over the K classes. We also assume access to a validation set for hyper-parameter tuning and a test set for evaluating performance. For example, fθ(·) can be a neural network with learnable parameters θ, x is an image, and p̂ is the output of a softmax layer whose kth element p̂k is the probability score for class k. We refer to ŷ = argmaxk∈Y p̂k as the network's prediction and the associated probability score p̂ŷ as the predicted confidence; the same quantity for the jth example is p̂ŷ,j.
In this setting, a network is said to be perfectly calibrated if the predicted confidence p̂ŷ reflects the true probability of the network classifying x correctly, i.e. $P(\hat{y} = y_{true} \mid \hat{p}_{\hat{y}} = p) = p$, $\forall p \in [0, 1]$ Guo et al. (2017). Continuing our example, if the network assigns an average confidence score of 0.8 to a set of predictions, then we should expect 80% of those to be correct. We define the Calibration Error as $\mathcal{E} = \hat{p}_{\hat{y}} - P(\hat{y} = y_{true} \mid \hat{p}_{\hat{y}})$ and the Expected Calibration Error as $\mathbb{E}_{\hat{p}_{\hat{y}}}\big[|\hat{p}_{\hat{y}} - P(\hat{y} = y_{true} \mid \hat{p}_{\hat{y}})|\big]$ Guo et al. (2017). However, as the true calibration error cannot be computed empirically with a finite-sized dataset, the following three approximations are generally used in the literature. For a dataset $\{(x_n, y_{true,n})\}_{n=1}^{N}$: (1) $\mathrm{ECE} = \sum_{i=1}^{M} \frac{|B_i|}{N} |C_i - A_i|$ Guo et al. (2017), where $B_i$ is the equal-width bin containing all examples $j$ with $\hat{p}_{\hat{y},j}$ in the range $[\frac{i}{M}, \frac{i+1}{M})$, $C_i = \frac{1}{|B_i|}\sum_{j \in B_i} \hat{p}_{\hat{y},j}$ is the average confidence, and $A_i = \frac{1}{|B_i|}\sum_{j \in B_i} \mathbb{1}(\hat{y}_j = y_{true,j})$ is the bin accuracy; note that $E_i = C_i - A_i$ is the empirical approximation of the calibration error $\mathcal{E}$. (2) $\mathrm{AdaECE} = \sum_{i=1}^{M} \frac{|B_i|}{N} |C_i - A_i|$ Nguyen & O'Connor (2015), where the bins are adaptively sized (equal-mass), i.e. $|B_i| = |B_j|\ \forall i, j$, so each bin contains an equal number of samples. (3) ClasswiseECE Kumar et al. (2018); Kull et al. (2019) estimates the calibration over all K classes: $\mathrm{ClasswiseECE} = \frac{1}{K}\sum_{i=1}^{M}\sum_{k=1}^{K} \frac{|B_{i,k}|}{N} |C_{i,k} - A_{i,k}|$, where $C_{i,k} = \frac{1}{|B_{i,k}|}\sum_{j \in B_{i,k}} \hat{p}_{k,j}$ is the average confidence for the $k$th class and $A_{i,k} = \frac{1}{|B_{i,k}|}\sum_{j \in B_{i,k}} \mathbb{1}(y_{true,j} = k)$ is the accuracy of the $k$th class in the $i$th bin.
Lastly, as ECE has been shown to be a biased estimate of true calibration Vaicenavicius et al. (2019), we additionally use two de-biased estimates of ECE, namely $\mathrm{ECE}_{debiased}$ proposed in Kumar et al. (2019) and $\mathrm{ECE}_{sweep}$ proposed in Roelofs et al. (2021), to further confirm our results.
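For concreteness, a minimal NumPy sketch of the equal-width ECE estimator defined above is given below; it is illustrative and not the authors' code.

```python
import numpy as np

def ece_equal_width(conf, correct, num_bins=15):
    # ECE = sum_i |B_i|/N * |C_i - A_i| over equal-width confidence bins
    # (Guo et al., 2017); conf = top-class confidences, correct = 0/1 flags.
    N = len(conf)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for b in range(num_bins):
        lo, hi = edges[b], edges[b + 1]
        # samples with conf == 1.0 are assigned to the last bin
        in_bin = (conf >= lo) & ((conf < hi) if b < num_bins - 1 else (conf <= hi))
        if in_bin.any():
            ece += in_bin.sum() / N * abs(conf[in_bin].mean() - correct[in_bin].mean())
    return ece
```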
3 CALIBRATION PROPERTIES OF FOCAL LOSS
Focal loss Lin et al. (2017), $\mathcal{L}_{FL}(p) = -(1 - p)^{\gamma} \log p$, was originally proposed to improve the accuracy of classifiers by focusing on hard examples and down-weighting well-classified examples. Recently, it was further shown that focal loss may also result in significantly better calibrated models than cross entropy Mukhoti et al. (2020). This is because, based on the relation $\mathcal{L}_{FL} \ge \mathrm{KL}(q \,\|\, \hat{p}) - \gamma H(\hat{p})$, where q is the one-hot target vector, focal loss, while minimising the main KL-divergence objective, also increases the entropy of the prediction p̂. As a consequence, this prevents the network from being overly confident on wrong predictions and overall improves calibration.
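In code, focal loss differs from cross-entropy only by the (1 − p)^γ modulating factor; a minimal PyTorch sketch (illustrative; it operates on true-class probabilities):

```python
import torch

def focal_loss(p_true, gamma):
    # Focal loss on the true-class probability p: -(1 - p)^gamma * log(p).
    # gamma = 0 recovers the standard cross-entropy (negative log-likelihood).
    p = p_true.clamp_min(1e-12)  # numerical safety for log
    return -((1.0 - p_true) ** gamma * torch.log(p)).mean()
```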
The regular focal loss with fixed γ, as we show in this section, does not achieve the best calibration. In Figure 1, we plot the calibration behaviour of ResNet-50 in different bins when trained on CIFAR-10 with different focal losses. The ith bin's calibration error, subscripted by "val", Eval,i = Cval,i − Aval,i, is computed on the validation set using 15 equal-mass bins. The figure shows the lowest (bin-0), a middle (bin-7), and the highest bin (bin-14). For reference, the rest of the bins and their bin boundaries are
shown in Appendix B. From Figure 1(a), we see that although focal loss with γ = 4 achieves the overall lowest calibration error (AdaECE), there is no single γ that performs the best across all the bins. For example, in bin-0, γ = 4, 5 seem to achieve better calibration whereas γ = 0, 3 are over-confident. For bin-7, on the other hand, γ = 3 seems to be better calibrated whereas γ = 4, 5 are under-confident and γ = 0 is over-confident.
This clearly indicates that using different γs for different bins can further improve calibration. Such an attempt is presented in Mukhoti et al. (2020), called the Sample-Dependent Focal Loss (FLSD-53), which assigns γ = 5 if the training sample's true class posterior p̂ytrue ∈ [0, 0.2) and γ = 3 if p̂ytrue ∈ [0.2, 1]. However, this strategy is fixed for every dataset-model pair and is based on the simple heuristic of choosing a higher γ for smaller values of p̂ytrue and a relatively lower γ for higher values of p̂ytrue. Moreover, from Figure 1(b), we see that FLSD-53 is also not the best strategy across all the bins. This therefore motivates the design of a γ-selection strategy that can assign an appropriate γ to each bin based on the magnitude and sign of Eval,i. In order to design such a strategy, however, we need solutions to the following two major challenges:
1. How do we find some correspondence between the "confidence of the training samples", which we can manipulate during training by adjusting the entropy-regularising parameter γ, and the "confidence of the validation samples", which we actually want to manipulate but do not have direct control over? In other words, in order to indirectly control the confidence of a particular group of validation samples, how do we know which particular group of training samples' confidence should be manipulated?
2. Given that there is a correspondence between a training group and a validation group (even if it’s loose), how do we arrive at the exact values of γ that will lead to better calibration?
We try to answer the first question in the next section, and the answer to the second question leads to AdaFocal, which is the main contribution of this paper.
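For reference, the FLSD-53 γ-selection rule described above reduces to a two-case assignment; a minimal sketch (function name is our own):

```python
def flsd53_gamma(p_true):
    # Sample-dependent gamma of FLSD-53 (Mukhoti et al., 2020):
    # gamma = 5 if the true-class posterior is in [0, 0.2), else gamma = 3.
    return 5.0 if p_true < 0.2 else 3.0
```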
4 CORRESPONDENCE BETWEEN CONFIDENCE OF TRAIN AND VAL. SAMPLES
In order to find some correspondence, an intuitive thing to do would be to group the validation samples into M equal-mass validation-bins, and then use these validation-bin boundaries to group the training samples as well. Then, we can compare the average confidence of the validation samples and the average confidence of the training samples, in the same validation-bin, to check for any correspondence.
Quantities of interest For binning validation samples, we always look at the confidence of the top predicted class ŷ denoted by p̂val,top (bin average: Cval,top). For training samples, on the other hand, instead of the confidence of the top predicted class ŷ denoted by p̂train,top (bin average: Ctrain,top), we will focus on the confidence of the true class ytrue denoted by p̂train,true (average: Ctrain,true) because during training we only care about p̂train,true which is manipulated through some loss function. For reference however, Figure 10 in Appendix C compares Ctrain,true and Ctrain,top to show that as the training set accuracy approaches 100%, the top predicted class and the true class for a training sample become the same. Henceforth, for a cleaner notation, we will always refer to Ctrain ≡ Ctrain,true and Cval ≡ Cval,top.
Common binning When training samples are grouped using the bin boundaries of the validation-bins: in Figure 2(b), we compare Ctrain,i in validation-bin-i1 with Cval,i in the same validation-bin-i, and find that there is indeed a good correspondence between the two quantities. For example, in Figure 2(b), as γ increases from 0, 3 to 5, the solid line (Ctrain,i) gets lower, and the same behaviour is observed on the starred line (Cval,i) as well. For completeness, the rest of the bins are shown in Figure 12 in Appendix C. This is very encouraging, as now we can expect (even though loosely) that if we increase/decrease the confidence of a group of training samples in some lower (or middle, or higher) probability region, then the same will be reflected on a similar group of validation samples in a lower (or middle, or higher) probability region. This therefore provides a way to indirectly control the value of Cval,i by manipulating Ctrain,i, and from a calibration point of view, our strategy going forward will be to exploit this correspondence to keep Ctrain,i (which we have control over during training) close to Aval,i (the validation set accuracy in validation-bin-i) so that Cval,i also stays close to Aval,i, thereby reducing the overall calibration error Eval,i = Cval,i − Aval,i.
Independent binning Before proceeding, for completeness, we also look at the case where training samples and validation samples are grouped independently into their respective training-bins and validation-bins. Figure 2(a) compares Ctrain,i in training-bin-i with Cval,i in validation-bin-i. We observe a similar behaviour as mentioned above. Note that since the binning is independent, the boundaries of training-bin-i may not be exactly the same as those of validation-bin-i; however, as shown in Figure 11 in Appendix C (along with the rest of the bins and their bin boundaries), they are quite close, meaning that a training group in a lower (/middle/higher) probability region has good correspondence with the validation group in a similar nearby region.
Going forward, for ease of algorithm design, we simply stick to the case of "common binning", where training samples are grouped as per the validation-bin boundaries. This allows us to maintain a one-to-one correspondence between the boundaries of the ith training and validation groups.
5 PROPOSED METHOD
Let’s denote the nth training sample’s true class posterior p̂ytrue by pn. Given that pn falls into validation-bin-b, our goal is to keep pn, or as per the discussion above its averaged equivalent Ctrain,b, closer to Aval,b so that the same is reflected on Cval,b. For manipulating pn, we will utilize the regularization effect that focal loss’s parameter γ has on the confidence of the predictions Mukhoti
1It may happen that no training sample falls within a particular validation-bin's boundaries. In that case, Ctrain,i drops to zero, as seen for example in bin-14 in Figure 2(b).
et al. (2020). At this point, one can choose to update γb based either on (1) how far pn is from Aval,b, i.e. γ = f(pn − Aval,b), or (2) how far Cval,b is from Aval,b, i.e. γ = f(Cval,b − Aval,b). Such a γ-update rule should ensure that whenever the model is over-confident, i.e. pn > Aval,b (or Cval,b > Aval,b), γ is increased so that the gradients get smaller, which prevents pn from increasing further. On the other hand, when pn < Aval,b (or Cval,b < Aval,b), i.e. the model is under-confident, we decrease γ so as to get larger gradients that in turn will increase pn2.
Based on this discussion, we next design and study a calibration-aware γ-update strategy called CalFocal, which with some additional modifications leads to AdaFocal.
5.1 CALIBRATION AWARE FOCAL LOSS (CALFOCAL)
Case 1: γ = f(pn −Aval,b) Treating Aval,b as the point that we want pn to not deviate from, we make the focal loss parameter γ a function of pn −Aval,b to get
$$\mathcal{L}_{CalFocal}(p_n) = -(1 - p_n)^{\gamma_n} \log p_n, \quad \text{with} \quad \gamma_n = \exp\big(\lambda(p_n - A_{val,b})\big), \tag{1}$$
where b is the validation-bin in which pn falls. The hyper-parameter λ is a scaling factor which, combined with the exponential function, helps to quickly ramp γ up or down. The exponential function adheres to the γ-update rule mentioned earlier and also ensures γ > 0. Figure 3(a) plots LCalFocal vs. pn for Aval,b = 0.8. We see that, depending on the strength of λ, the loss drops drastically near pn = 0.8 and thereafter remains close to zero. This shows that LCalFocal aims to first push p towards 0.8 and then slow its growth towards overconfidence. Next, in Figure 3(c), we find that CalFocal with λ = 10, 100 is able to reduce the calibration error compared to cross entropy, but it is still far from FLSD-53's performance. Also note in Figure 3(b) that too high a λ (= 100) affects the accuracy of the model. Most importantly, Figure 3(d) compares Ctrain,i with Cval,i (and also Aval,i) for bin-0, where we find some evidence that the strategy of bringing pn or Ctrain,i (solid lines) closer to Aval,i (dashed lines) results in Cval,i (starred lines) getting closer to Aval,i as well, thus reducing the calibration error Eval,i = Cval,i − Aval,i slightly.
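A minimal sketch of the Case-1 γ computation in Eq. 1 (the function name and arguments are illustrative):

```python
import numpy as np

def calfocal_gamma_case1(p_true, A_val_b, lam):
    # Eq. 1: per-sample gamma_n = exp(lambda * (p_n - A_val_b)), where
    # A_val_b is the accuracy of the validation bin containing p_n.
    return np.exp(lam * (p_true - A_val_b))
```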
Case 2: γ = f(Cval,b − Aval,b) Note that Eq. 1 assigns a different γn to each training sample. To reduce computation and avoid maintaining a separate γn per sample, one can instead use a common γb for all the training samples that fall into validation-bin-b, by making γ a function of Cval,b − Aval,b instead of pn − Aval,b.
$$\mathcal{L}_{CalFocal}(p_n) = -(1 - p_n)^{\gamma_b} \log p_n, \quad \text{with} \quad \gamma_b = \exp\big(\lambda(C_{val,b} - A_{val,b})\big) \tag{2}$$
where b is the validation-bin in which pn falls. As shown in Appendix D, its performance is very similar to (or slightly better than) that of CalFocal in Eq. 1. Further, it makes more sense to update γ based on how far Cval,b is from Aval,b rather than how far pn is from Aval,b because, as shown for bin-0 in Figure 3(d), one may find Cval,b (starred lines) quite close to Aval,b (dashed lines) even when pn or its
2Note that for focal loss, increasing γ does not always lead to smaller gradients. This mostly holds true in the region pn approximately > 0.2 (see Figure 3(a) in Mukhoti et al. (2020)). However, in practice, and as shown by the training-bin boundaries of bin-0 and bin-1 in Figure 11 in Appendix C, we find the majority of the training samples to lie above 0.2 for most of the training. Therefore, for the experiments in this paper, we simply stick to the rule of increasing γ to decrease gradients and stop pn from increasing, and vice versa.
average equivalent Ctrain (solid lines) is far from Aval,b. At the point where Cval,b = Aval,b, we should stop updating γ further, even though pn − Aval,b ≠ 0, as we have reached our goal of making Eval,b = Cval,b − Aval,b = 0. Therefore, we use Eq. 2 of Case 2 as the base for AdaFocal.
Limitations of CalFocal: (1) Suppose that at some point during training, a high γb reduces the calibration error Eval,b = Cval,b − Aval,b over the next few epochs. It is then desirable to continue training with the same high γb. However, CalFocal's update rule in Eq. 2 will instead reduce γ → 1 as Cval,b − Aval,b → 0. (2) Suppose that at some point Cval,b − Aval,b is quite high. This will set γb to some high value as well, depending on the hyper-parameter λ. If this γb is still not high enough to bring down the confidence, we would want a way to increase γb further. However, CalFocal is incapable of doing so, as it will continue to hold at γb = exp(λ(Cval,b − Aval,b)). By addressing these two issues, we arrive at the final algorithm, AdaFocal, presented in the next sub-section.
5.2 CALIBRATION-AWARE ADAPTIVE FOCAL LOSS (ADAFOCAL)
A straightforward way to address the above limitations is to make γb,t depend on γb,t−1 i.e.
$$\mathcal{L}(p_n, t) = -(1 - p_n)^{\gamma_{b,t}} \log p_n, \quad \text{with} \quad \gamma_{b,t} = \gamma_{b,t-1} \cdot \exp(C_{val,b} - A_{val,b}). \tag{3}$$
This update rule addresses the limitations of CalFocal in the following way. Suppose at some point we observe over-confidence, i.e. Eval,b = Cval,b − Aval,b > 0. Then, in the next step, γb will be increased, and in subsequent steps it will continue to increase until the calibration error Eval,b starts decreasing (this additional increase in γ was not possible with CalFocal). Once Eval,b starts decreasing, the growth of γb slows over the next epochs, and γb ultimately settles down to a value at which Eval,b = 0 (whereas CalFocal at Eval,b = 0 would force γ down to 1). Next, if this current value of γb starts causing under-confidence, i.e. Cval,b − Aval,b < 0, the update rule kicks in to reduce γ, thus allowing Cval,b to increase back to Aval,b. This oscillating behaviour of AdaFocal around the desired point Cval,b = Aval,b is its main advantage in reducing the calibration error in every bin. Additionally, note the absence of the hyper-parameter λ in the exponent of Eq. 3, which makes AdaFocal hyper-parameter free.
Finally, note an undesirable property of Eq. 3: the unbounded exponential update. This may easily cause γt to explode, since it can be expanded as $\gamma_t = \gamma_{t-1} \exp(E_{val,t}) = \gamma_0 \exp(E_{val,0} + E_{val,1} + \cdots + E_{val,t-1} + E_{val,t})$. Thus, if Eval,t > 0 for quite a few epochs, γt will become so large that even if Eval,t < 0 in the subsequent epochs, it may not decrease to a desired level. We remedy this by simply constraining γt to an upper bound γmax, which gives the AdaFocal loss
$$\mathcal{L}_{AdaFocal}(p_n, t) = -(1 - p_n)^{\gamma_{b,t}} \log p_n, \quad \text{with} \quad \gamma_{b,t} = \min\big\{\gamma_{max},\; \gamma_{b,t-1} \cdot e^{C_{val,b} - A_{val,b}}\big\} \tag{4}$$
An algorithmic description of training with AdaFocal (or CalFocal) is given in Algorithm 1. Limitation: One may argue that γmax is again a hyper-parameter; however, note that it does not require any special fine-tuning. Its sole purpose is to stop γ from exploding, and any reasonable value around 20 works quite well in practice. For all our experiments, we use γmax = 20. For a comparison of AdaFocal with γmax = 20, γmax = 50, and unconstrained γmax = ∞, please refer to Appendix L.
6 EXPERIMENTS
Experimental setup We evaluate the performance of our proposed method on image and text classification tasks. For image classification, we use CIFAR-10, CIFAR-100 Krizhevsky (2009), Tiny-ImageNet Deng et al. (2009), and ImageNet Russakovsky et al. (2015) to analyze the calibration of ResNet-50, ResNet-110 He et al. (2016), Wide-ResNet-26-10 Zagoruyko & Komodakis (2016), and DenseNet-121 Huang et al. (2017) models. For text classification, we use the 20 Newsgroups dataset Lang (1995) and train the Global Pooling CNN model Lin et al. (2014). Further details about the datasets, models, and experimental configurations are given in Appendix E.
Baseline As baseline calibration methods we use MMCE Kumar et al. (2018), Brier loss Brier (1950), Label smoothing Müller et al. (2019) and sample-dependent focal loss FLSD-53. We also report the effect of temperature scaling Guo et al. (2017) on top of these calibration methods. Following Mukhoti et al. (2020), we select the optimal temperature that produces the minimum ECE on the validation set by searching in the interval (0, 10] with step size of 0.1.
Results. In Figure 4, we compare AdaFocal against cross entropy (CE) and FLSD-53 for ResNet-50 trained on various small to large-scale image datasets. We chose FLSD-53 as our competitive baseline
as it was shown in Mukhoti et al. (2020) to be consistently better than MMCE, Brier Loss, and Label smoothing across many dataset-model pairs. The figure plots the test set error and the ECE calibration metric. In Figure 5, for ResNet-50 on CIFAR-10 and ImageNet, we plot (1) the calibration statistics Eval = Cval − Aval of the validation set and (2) the dynamics of the associated γt used by AdaFocal during training, for a few bins covering the lower, middle, and higher probability regions.
From these figures, we first observe that for CIFAR-10, CIFAR-100, and Tiny-ImageNet, FLSD-53 is much better calibrated than CE. This is because, as shown in Figure 5(a) for ResNet-50 on CIFAR-10, CE is over-confident compared to FLSD-53 in every bin. For ImageNet, however, the behaviour is reversed: FLSD-53 is more poorly calibrated than CE. The reason, as shown in Figure 5(b), is that due to the use of the high values γ = 5, 3, FLSD-53 makes the model largely under-confident in each bin, leading to an overall high calibration error. This shows that FLSD-53 is a heuristic strategy (derived from a limited number of dataset-model pairs) that does not generalize well. AdaFocal, on the other hand, is well calibrated for all four dataset-model pairs while achieving similar accuracy.
The dynamics/evolution of γt during training for different bins is shown in Figure 5: (1) for CIFAR-10, we find γt to be closer to 1 for higher bins and closer to 20 for lower bins. These γs found by AdaFocal result in better calibration than the γ = 5, 3 of FLSD-53. (2) For ImageNet, we find AdaFocal's γ → 0. This makes sense because for ImageNet, from Figure 4(d), cross entropy (i.e. γ = 0 for every bin) is much better calibrated than FLSD-53, and AdaFocal (starting from γ = 1) also ultimately settles down to CE (γ = 0) to achieve a similar level of calibration. This confirms that during training, unlike CE or FLSD-53, AdaFocal, being aware of the network's current under/over-confidence, is able to guide the γs to values that maintain a well calibrated model at every step. Also note that for an unseen dataset-model pair there is no way to know beforehand which γ will perform better, but this empirical evidence shows that AdaFocal will automatically find the appropriate γs.
The rest of the experiments are shown in Table 1 (ECE) and Table 2 (Error)3. From Table 1, we observe that prior to temperature scaling, AdaFocal outperforms the baseline methods by a substantial margin in 9 out of 11 cases. With post-temperature scaling included, AdaFocal achieves the lowest calibration error in 7 out of the 11 experiments. Further, observe that in many cases temperature scaling on top of AdaFocal does not offer any improvement (optimal temperature = 1). For the rest, the optimal temperature is close to 1, indicating that AdaFocal produces innately calibrated models during training itself. The consistency of AdaFocal across other calibration metrics is shown through AdaECE and classwise-ECE in Appendix F. $\mathrm{ECE}_{debiased}$ (15 and 30 bins), $\mathrm{ECE}_{EW\text{-}sweep}$ (equal-width), and $\mathrm{ECE}_{EM\text{-}sweep}$ (equal-mass) are reported in Appendix G. The significance of the results is confirmed through ECE error bars, with means and standard deviations computed over 5 runs, in Appendix H.
Number of bins The ECE metrics in the paper are reported using 15 bins. For AdaFocal training, we experiment with 5, 10, 15, 20, 30, and 50 equal-mass (adaptive) bins when drawing calibration statistics from the validation set, as reported in Appendix I. We find the best results to come from the range 10 to 20. Performance degrades when the number of bins is too small (< 10) or too large (> 20); therefore, for AdaFocal training in the paper we use 15 bins as well.
Out-of-Distribution (OOD) detection. Following Mukhoti et al. (2020), we report the performance of AdaFocal on an OOD detection task. We train ResNet-110 and Wide-ResNet-26-10 on
3While reproducing the baseline experiments in Mukhoti et al. (2020) we obtained very similar results; therefore, we simply borrow the exact values to maintain a consistent comparison.
CIFAR-10 as the in-distribution data and test on SVHN Netzer et al. (2011) and CIFAR-10-C Hendrycks & Dietterich (2019) (with level-5 Gaussian noise corruption) as OOD data. Using the entropy of the softmax as the measure of uncertainty, the corresponding ROC plots are shown in Figure 6, and the AUROC scores are reported in Table 10 in Appendix J. We see that models trained with AdaFocal outperform focal loss γ = 3 (FL-3) and FLSD-53. For the exact AUROC scores, please refer to Appendix J. These results further highlight the benefit of an inherently calibrated model produced using AdaFocal, as post-hoc techniques such as temperature scaling, as shown in the figure, are ineffective under distributional shift Snoek et al. (2019).
7 CONCLUSION
In this work, we first revisit the calibration properties of the regular focal loss and highlight the downside of using a fixed γ for all samples. In particular, by studying the calibration behaviour of different samples in different probability regions, we find that there is no single γ that achieves the best calibration over the entire region. We use this observation to motivate the selection of γ independently for each sample (or group of samples) based on knowledge of the network's under/over-confidence. We propose a calibration-aware adaptive focal loss called AdaFocal that accounts for such information and updates γt at every step based on γt−1 from the previous step and the magnitude of the network's under/over-confidence. We find AdaFocal to perform consistently better across different dataset-model pairs, producing innately calibrated models that most times do not substantially benefit from the post-hoc processing of temperature scaling. Additionally, we find models trained with AdaFocal to be significantly better on an out-of-distribution detection task.
Reproducibility For reproducibility, we have included in the supplementary material a zip file that contains the code base for running the experiments. For running particular experiments:
• CIFAR-10, ResNet-50, Cross entropy: python train.py --dataset cifar10 --model resnet50 --loss cross_entropy --num_bins 15 -e 400 --save-path experiments/cifar10_resnet50_ce
• CIFAR-100, ResNet-50, Cross entropy: python train.py --dataset cifar100 --model resnet50 --loss cross_entropy --num_bins 15 -e 400 --save-path experiments/cifar100_resnet50_ce
• Tiny-ImageNet, ResNet-50, Cross entropy: python train.py --dataset tiny_imagenet --model resnet50_ti --loss cross_entropy --num_bins 15 --first-milestone 40 --second-milestone 60 -e 100 -b 64 -tb 64 --dataset-root data/tiny-imagenet-200 --save-path experiments/tinyImageNet_resnet50_ce
• 20 Newsgroups, CNN, Cross entropy: python main.py --loss cross_entropy --num-epochs 50 --num-bins 15 --save-path experiments/cnn_ce
APPENDICES
A ADAFOCAL’S GENERALIZATION TO LARGE SCALE DATASET (IMAGENET)
For ImageNet, FLSD-53 seems to perform very poorly in terms of calibration. The reason is that, due to the higher values of γ = 5, 3, FLSD-53 becomes extremely under-confident in each bin, leading to a high calibration error. AdaFocal, on the other hand, remains well calibrated, which confirms that during training, unlike CE or FLSD-53, AdaFocal, being aware of the network's current under/over-confidence (through the validation set), is able to adjust the γs in a way that maintains a well calibrated model at every step. Further, in Figure 8, note the dynamics/evolution of γt in different bins. For ImageNet, we find AdaFocal's γ → 0, which makes sense because, from Figure 7, cross entropy (i.e. γ = 0 for every bin) is much better calibrated than FLSD-53, and AdaFocal (starting from γ = 1) settles down to CE (γ = 0). Note that for an unseen dataset-model pair it is not possible to know beforehand whether CE or focal loss will perform better. However, from these experiments, we find strong evidence that, for any dataset-model pair, AdaFocal will lead to the γs that result in the best calibration.
B CALIBRATION BEHAVIOUR OF FOCAL LOSS IN DIFFERENT BINS
In the main paper, we showed the calibration behaviour of different focal losses for ResNet-50 trained on CIFAR-10 for only a few bins. For completeness, the rest of the bins and their calibration error Ei = Cval,i − Aval,i are shown in Figure 9 for focal losses with γ = 0, 3, 4, 5. We observe that there is no single γ that performs the best across all the bins. Rather, every bin seems to have a particular γ that achieves the best calibration.
C CORRESPONDENCE BETWEEN CONFIDENCE OF TRAINING AND VALIDATION SAMPLES
D CALFOCAL LOSS
E DATASETS AND EXPERIMENTS
E.1 DATASET DESCRIPTION
CIFAR-10 Krizhevsky (2009): This dataset contains 60,000 coloured images of size 32 × 32, which are equally divided into 10 classes. A split of 45,000/5,000/10,000 images is used as the train/validation/test sets respectively.
CIFAR-100 Krizhevsky (2009): This dataset contains 60,000 coloured images of size 32 × 32, which are equally divided into 100 classes. A split of 45,000/5,000/10,000 images is used as the train/validation/test sets respectively.
ImageNet Russakovsky et al. (2015): ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 is an image classification and localization dataset. This dataset spans 1000 object classes and contains 1,281,167 training images and 50,000 validation images.
Tiny-ImageNet Deng et al. (2009): It is a subset of the ImageNet dataset with 64 × 64 dimensional images and 200 classes. It has 500 images per class in the training set and 50 images per class in the validation set.
20 Newsgroups Lang (1995): This dataset contains 20,000 news articles, categorised evenly into 20 different newsgroups. Some of the newsgroups are very closely related to each other (e.g. comp.sys.ibm.pc.hardware / comp.sys.mac.hardware), while others are highly unrelated (e.g. misc.forsale / soc.religion.christian). We use a train/validation/test split of 15,098/900/3,999 documents.
E.2 EXPERIMENTAL DETAILS
For all our experiments, we used an Nvidia Titan X Pascal GPU with 12GB memory.
CIFAR-10 and CIFAR-100: We use SGD with a momentum of 0.9 as our optimiser, and train the networks for 350 epochs, with a learning rate of 0.1 for the first 150 epochs, 0.01 for the next 100 epochs, and 0.001 for the last 100 epochs. We use a training batch size of 128. The training data is augmented by applying random crops and random horizontal flips.
Tiny-ImageNet: We use SGD with a momentum of 0.9 as our optimiser, and train the models for 100 epochs with a learning rate of 0.1 for the first 40 epochs, 0.01 for the next 20 epochs and 0.001 for the last 40 epochs. We use a training batch size of 64. Note that we use 50 samples per class (i.e. a total of 10000 samples) from the training set as the validation set. Hence, the training is only on 90000 images. We use the Tiny-ImageNet validation set as our test set.
ImageNet: We use SGD as our optimiser with momentum of 0.9 and weight decay $10^{-4}$, and train the models for 90 epochs with a learning rate of 0.01 for the first 30 epochs, 0.001 for the next 30 epochs, and 0.0001 for the last 30 epochs. We use a training batch size of 128. We divide the 50,000 validation images into validation and test sets of 25,000 images each.
20 Newsgroups: We train the Global Pooling CNN Network Lin et al. (2014) using the Adam optimiser, with learning rate 0.001, and default betas 0.9 and 0.999. We used GloVe word embeddings
Pennington et al. (2014) to train the network. We train the model for 50 epochs and use the model at the end to evaluate the performance.
All our experiments are implemented in PyTorch. The hyperparameters that are not explicitly mentioned above are set to their default values. For CIFAR-10/100 and Tiny-ImageNet, AdaFocal is implemented on top of the base code available from Mukhoti (2020). The code for 20 Newsgroups is implemented in PyTorch by adapting the code (TensorFlow) available from Kumar (2018).
The experimental results in the paper are reported for the model at the end of (1) CIFAR-10/100: 350 epochs, (2) Tiny-ImageNet: 100 epochs, (3) ImageNet: 90 epochs, and (4) 20 NewsGroups: 50 epochs.
F ADAECE AND CLASSWISE-ECE PERFORMANCE
Here, we compare the performance of AdaFocal against the baseline methods in terms of AdaECE and classwise-ECE in Table 3 and 4 respectively. For CIFAR-10/100, the values are reported for the model at the end of 350 epochs; for Tiny-ImageNet, at the end of 100 epochs; and for 20 NewsGroup dataset, at the end of 50 epochs. From these tables, we observe that AdaFocal outperforms all the baseline methods by a substantial margin, especially if we compare the pre-temperature scaling results.
[Tables 3 and 4 (AdaECE and classwise-ECE): column headers — Dataset, Model, Cross Entropy, Brier Loss, MMCE, LS-0.05, FLSD-53, AdaFocal; the table bodies were not recovered.]
[Further tables: column headers — Dataset, Model, Cross Entropy, FLSD-53, AdaFocal; the table bodies were not recovered.]
G DEBIASED ESTIMATES OF ECE
[Table (debiased ECE estimates): column headers — Dataset, Model, Cross Entropy, FLSD-53, AdaFocal; the table body was not recovered.]
H ECE ERROR BARS
I NUMBER OF BINS USED DURING ADAFOCAL TRAINING
Experiment details: CIFAR-10, ResNet50 trained for 350 epochs. The reported results below are without temperature scaling. Our method AdaFocal with 5, 10, 15, 20, 30, and 50 adaptive (equal-mass) bins vs. FLSD-53. Note that there are two types of binning here: the adaptive binning used during AdaFocal training to draw calibration statistics from the validation set, and the binning used to compute the reported ECE metric.
J AUROC FOR OUT-OF-DISTRIBUTION DETECTION
For ResNet110 on CIFAR-10/SVHN, we were not able to reproduce the reported results of 96.74, 96.92 for FL-3 in Mukhoti et al. (2020). Instead, we found those values to be 90.27, 90.39 and report them in Table 10.
K MOVING AVERAGE γ-UPDATE RULE
For the focal loss in the paper, $\mathcal{L}(p_n, t) = -(1-p_n)^{\gamma_{b,t}}\log p_n$, the unconstrained γ-update rule for AdaFocal is given by
\begin{align*}
\gamma_{t+1} &= \gamma_t \cdot \exp(C_{val,t+1} - A_{val,t+1}) \qquad &(5)\\
&= \gamma_t \cdot \exp(\mathcal{E}_{val,t+1}) \qquad &(6)
\end{align*}
If instead we use exponential moving average to update γ, then the update rule (let’s call it MA-α) is given by
\begin{align*}
\gamma_{t+1} &= (\gamma_t)^{\alpha} \cdot \left(e^{\mathcal{E}_{val,t+1}}\right)^{1-\alpha} \qquad &(7)\\
&= \gamma_{t-1}^{\alpha} \cdot e^{\alpha\,\mathcal{E}_{val,t}} \cdot e^{(1-\alpha)\,\mathcal{E}_{val,t+1}} \qquad &(8)\\
&= \gamma_{t-1}^{\alpha} \cdot e^{\left[\alpha\,\mathcal{E}_{val,t} + (1-\alpha)\,\mathcal{E}_{val,t+1}\right]} \qquad &(9)
\end{align*}
where the second line substitutes the unconstrained rule $\gamma_t = \gamma_{t-1}\,e^{\mathcal{E}_{val,t}}$ from Eq. (6).
The evolution or dynamics of γ is given in Figure 16.
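For concreteness, a minimal sketch of the two update rules is given below; `gamma`, `C_val`, and `A_val` are assumed to be the per-bin quantities maintained during training, and the smoothing factor `alpha = 0.9` is an illustrative choice, not a value taken from the paper.

```python
import math

def unconstrained_update(gamma, C_val, A_val):
    # Eqs. (5)-(6): gamma_{t+1} = gamma_t * exp(E_val), with E_val = C_val - A_val
    return gamma * math.exp(C_val - A_val)

def ma_alpha_update(gamma, C_val, A_val, alpha=0.9):
    # Eq. (7): gamma_{t+1} = gamma_t^alpha * exp((1 - alpha) * E_val),
    # i.e. an exponential moving average of the exponents
    return (gamma ** alpha) * math.exp((1.0 - alpha) * (C_val - A_val))
```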
L MULTIPLE RUNS OF ADAFOCAL WITH DIFFERENT γmax
Due to the stochastic nature of the experiments, AdaFocal's γs may end up following different trajectories across different runs (initializations), which in turn might lead to variations in the final results. In this section, we look at the extent of such variations for (1) unconstrained γ, (2) γ capped at γmax = 20, and (3) γ capped at γmax = 50. We study this for ResNet-50 trained multiple times on CIFAR-10 starting with different random seeds.
L.1 ADAFOCAL, γmax = 20
In Figure 17, we observe that AdaFocal with γmax = 20 is consistently (9 out of 9 times) better than FLSD-53. Figure 18 shows the evolution of γ across different runs.
L.2 ADAFOCAL, γmax = 50
In Figure 19, we observe that AdaFocal with γmax = 50 has more variability than AdaFocal with γmax = 20 but is mostly better than FLSD-53. Figure 20 shows the evolution of γ across different runs.
L.3 ADAFOCAL, UNCONSTRAINED γ
In Figure 21, we observe that AdaFocal with unconstrained γ does exhibit some variability across different runs: 7 out of 9 times it performs better than FLSD-53 whereas the other two times it is similar or slightly worse.
The above behaviour is mostly due to the variations in the trajectory of γs for lower bins, as shown in Figure 22. For higher bins, we see the γs settling to similar values; however, for lower bins, since the γs are unconstrained, they blow up to very high values.
M ERROR, ECE AND BIN STATISTICS PLOTS FOR THE REST OF THE EXPERIMENTS | 1. What is the main contribution of the paper regarding model calibration?
2. What are the strengths of the proposed adaptive version of focal loss?
3. What are the weaknesses or concerns regarding the paper's approach, experiments, and conclusions?
4. Do you have any questions about the paper's content, such as figures, updates, and correspondence?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper considers the problem of model calibration. Existing works calibrate the model by post-hoc approaches or by objective functions tailored for calibration. The authors propose an adaptive version based on focal loss, which regularizes the overconfidence of neural networks. They observe that although focal loss improves calibration, it leaves out the under-confident samples. To mitigate the issue, they propose adjusting the hyper-parameter γ in focal loss according to the model's under/over-confidence. Experiments on vision and NLP classification tasks showcase the effectiveness of the adaptive version.
Review
Strong points
The paper proposes an adaptive version of focal loss for model calibration. The hyper-parameters in focal loss can dynamically adjust to the calibration criterion.
A similar validation set is used for updating the hyper-parameters. The updating rule is simple and can be easily plugged into existing methods.
The proposed method outperforms other baselines on extensive datasets.
Weak points
One glaring issue of the current version is the readability of figures. Fig (1),(2),(4),(5) are difficult to parse. There are even lines with duplicate colors in the second-row figures of Fig (2).b.
"And from a calibration point of view, our strategy going forward would be to exploit this behavior to keep C_{train,true,i} (which we have control over during training) closer to A_{val,i} so that, in turn, C_{val,top,i} also stays closer to A_{val,i} to overall achieve low calibration error C_{val,top,i} − A_{val,i}". It seems to me there are two pitfalls in the sentence: (1) since the validation set is the actual target (claimed by the author on page 2), how can people make use of the accuracies on the validation set to tune models? (2) The correspondence in average confidence
C
can not be directly translated into the correspondence in the calibration error
|
C
−
A
|
.
It seems that the update rule in Eq. (4) is unstable. For example, if C − A > k for m steps, then γ ≥ e^{mk}, which in turn makes the focal loss rather small. The authors propose to use a threshold to rein in the explosion. However, it seems to me that the explosion will commonly happen, and most of the time γ = γ_max. Could the authors provide the dynamics of γ during the training process?
Minors
In the second paragraph from the bottom: L_f should be the focal loss L_Focal?
It seems there is an extra "top" in the subtitle of Fig (1).b?
Bottom paragraph on page 4: the n-th sample should be in the i-th bin?
Sentence above Eq (1): A_{val,b} instead of A_{val,i}
What's the y-axis in Fig. (3)? Classification error on the validation set? |
ICLR | Title
AdaFocal: Calibration-aware Adaptive Focal Loss
Abstract
Much recent work has been devoted to the problem of ensuring that a neural network's confidence scores match the true probability of being correct, i.e. the calibration problem. Of note, it was found that training with focal loss leads to better calibrated deep networks than cross-entropy, while achieving the same level of accuracy Mukhoti et al. (2020). This success stems from focal loss regularizing the entropy of the network's prediction (controlled by the hyper-parameter γ), thereby reining in the network's overconfidence. Further improvement is expected if γ is selected independently for each training sample. However, the proposed Sample-Dependent Focal Loss (FLSD) in Mukhoti et al. (2020) is based on simple heuristics that do not take into account the difference in the network's calibration behaviour for different samples (or groups of samples). As a result, it is only slightly better than focal loss with fixed γ. In this paper, we propose a calibration-aware version of FLSD, called AdaFocal, which, at every training step t, adaptively modifies the γ for individual groups of samples based on (1) γt−1 from the previous training step and (2) the magnitude of the network's under/over-confidence for those groups. We evaluate our method on various small to large-scale image recognition tasks and one NLP task, covering a variety of network architectures, to confirm that AdaFocal consistently achieves improved calibration without a significant loss in accuracy. Further, the models trained with AdaFocal are shown to have significantly improved Out-of-Distribution (OOD) detection capability.
1 INTRODUCTION
Neural networks have found tremendous success in almost every field, including computer vision, natural language processing, and speech recognition. Over time, these networks have grown complex and larger in size to achieve state-of-the-art performance, and they continue to evolve further in that direction. However, it has been well established that such high-capacity networks suffer from poor calibration Guo et al. (2017), i.e. the confidence scores of the predictions do not reflect the real-world probabilities of those predictions being true. For example, if the network assigns 0.8 confidence to a set of predictions, we should expect 80% of those predictions to be correct. However, this is far from reality, since modern networks tend to be grossly over-confident. This is of great concern, particularly for mission-critical applications such as autonomous driving and medical diagnosis, wherein the downstream decision making relies not only on the predictions but also on their confidence.
In recent years, there has been a growing interest in developing methods for calibrating neural networks. These can be mainly divided into two categories (1) post-hoc approaches that perform calibration after training (2) methods that calibrate the model during training itself. The first includes methods such as Platt scaling Platt (1999), histogram binning Zadrozny & Elkan (2001), Isotonic regression Zadrozny & Elkan (2002), Bayesian binning and averaging Naeini et al. (2015); Naeini & Cooper (2016), and Spline fitting Gupta et al. (2021). Methods in the second category focus on training the model on an objective function that accounts for calibration as well, including Maximum Mean Calibration Error (MMCE) Kumar et al. (2018), Label smoothing Müller et al. (2019), and recently focal loss Mukhoti et al. (2020). These methods aim to produce inherently calibrated models which when combined with post training calibration methods lead to further improvements.
Contribution. Our work falls into the second category. We build upon the calibration properties of focal loss to propose a modification that further improves its performance. Firstly, we make the observation that while regular focal loss, with a fixed γ parameter, improves the overall calibration by preventing samples from being over-confident, it also leaves other samples under-confident. To
address this drawback, we propose a modification to the focal loss called AdaFocal that adjusts the γ for each training sample (or rather a group of samples) separately by taking into account the model’s under/over-confidence about a similar corresponding group in the validation set. We evaluate the performance of our method on four image classification tasks: CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, and one text classification task: 20 Newsgroup, using various model architectures, and show that AdaFocal substantially outperforms the regular focal loss and other state-of-the-art calibration techniques in the literature. We further study the performance of AdaFocal on an out-of-distribution detection task and find it to perform better than the competing methods. Finally, we find that the models trained using AdaFocal get innately calibrated to a level that most times do not significantly benefit from temperature scaling.
2 PROBLEM SETUP AND DEFINITIONS
Consider a classification setting where we are given a set of training data {(xn, ytrue,n)}, with xn ∈ X being the input and ytrue,n ∈ Y = {1, 2, . . . ,K} the associated ground-truth label. Using this data we wish to train a classifier fθ(x) that outputs a vector p̂ over the K classes. We also assume access to a validation set for hyper-parameter tuning and a test set for evaluating its performance. For example, fθ(·) can be a neural network with learnable parameters θ, x is an image, and p̂ is the output of a softmax layer whose kth element p̂k is the probability score for class k. We refer to ŷ = argmax_{k∈Y} p̂k as the network's prediction and the associated probability score p̂ŷ as the predicted confidence; the same quantity for the jth example is p̂ŷ,j.
In this setting, a network is said to be perfectly calibrated if the predicted confidence $\hat{p}_{\hat{y}}$ reflects the true probability of the network classifying $x$ correctly, i.e. $\mathbb{P}(\hat{y} = y_{true} \mid \hat{p}_{\hat{y}} = p) = p,\ \forall p \in [0, 1]$ Guo et al. (2017). Continuing our example, if the network assigns an average confidence score of 0.8 to a set of predictions then we should expect 80% of those to be correct. We define the Calibration Error as $\mathcal{E} = \hat{p}_{\hat{y}} - \mathbb{P}(\hat{y} = y_{true} \mid \hat{p}_{\hat{y}})$ and the Expected Calibration Error as $\mathbb{E}_{\hat{p}_{\hat{y}}}\big[\,|\hat{p}_{\hat{y}} - \mathbb{P}(\hat{y} = y_{true} \mid \hat{p}_{\hat{y}})|\,\big]$ Guo et al. (2017). However, as the true calibration error cannot be computed empirically with a finite sized dataset, the following three approximations are generally used in the literature. For a dataset $\{(x_n, y_{true,n})\}_{n=1}^{N}$:

(1) $\mathrm{ECE} = \sum_{i=1}^{M} \frac{|B_i|}{N}\,|C_i - A_i|$ Guo et al. (2017), where $B_i$ is the equal-width bin containing all examples $j$ with $\hat{p}_{\hat{y},j} \in [\frac{i}{M}, \frac{i+1}{M})$, $C_i = \frac{1}{|B_i|}\sum_{j \in B_i} \hat{p}_{\hat{y},j}$ is the average confidence, and $A_i = \frac{1}{|B_i|}\sum_{j \in B_i} \mathbb{1}(\hat{y}_j = y_{true,j})$ is the bin accuracy. Note that $\mathcal{E}_i = C_i - A_i$ is the empirical approximation of the calibration error $\mathcal{E}$;

(2) $\mathrm{AdaECE} = \sum_{i=1}^{M} \frac{|B_i|}{N}\,|C_i - A_i|$ Nguyen & O'Connor (2015), where the bins are adaptively sized (equal-mass), i.e. $|B_i| = |B_j|\ \forall i, j$, so that each bin contains an equal number of samples;

(3) $\mathrm{ClasswiseECE} = \frac{1}{K}\sum_{i=1}^{M}\sum_{k=1}^{K} \frac{|B_{i,k}|}{N}\,|C_{i,k} - A_{i,k}|$ Kumar et al. (2018); Kull et al. (2019), which estimates the calibration over all $K$ classes, where $C_{i,k} = \frac{1}{|B_{i,k}|}\sum_{j \in B_{i,k}} \hat{p}_{k,j}$ is the average confidence for the $k$th class and $A_{i,k} = \frac{1}{|B_{i,k}|}\sum_{j \in B_{i,k}} \mathbb{1}(y_{true,j} = k)$ is the accuracy of the $k$th class in the $i$th bin.
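To make the first estimator concrete, below is a minimal NumPy sketch of the equal-width ECE (a generic implementation, not the authors' code); `probs` is an (N, K) array of softmax outputs and `labels` an (N,) array of ground-truth classes.

```python
import numpy as np

def ece_equal_width(probs, labels, M=15):
    conf = probs.max(axis=1)                    # predicted confidence p_hat
    correct = (probs.argmax(axis=1) == labels)  # 1(y_hat == y_true)
    edges = np.linspace(0.0, 1.0, M + 1)
    ece, N = 0.0, len(labels)
    for i in range(M):
        lo, hi = edges[i], edges[i + 1]
        # right-closed last bin so that conf == 1.0 is counted
        in_bin = (conf >= lo) & ((conf < hi) if i < M - 1 else (conf <= hi))
        if in_bin.any():
            C_i = conf[in_bin].mean()           # average bin confidence
            A_i = correct[in_bin].mean()        # bin accuracy
            ece += (in_bin.sum() / N) * abs(C_i - A_i)
    return ece
```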
Lastly, as ECE has been shown to be a biased estimate of true calibration Vaicenavicius et al. (2019), we additionally use two de-biased estimates of ECE namely ECEdebiased proposed in Kumar et al. (2019) and ECEsweep proposed in Roelofs et al. (2021) to further confirm our results.
3 CALIBRATION PROPERTIES OF FOCAL LOSS
Focal loss Lin et al. (2017), $\mathcal{L}_{FL}(p) = -(1-p)^{\gamma}\log p$, was originally proposed to improve the accuracy of classifiers by focusing on hard examples and down-weighting well-classified examples. Recently it was further shown that focal loss may also result in significantly better calibrated models than cross entropy Mukhoti et al. (2020). This is because, based on the relation $\mathcal{L}_{FL} \geq \mathrm{KL}(q\,\|\,\hat{p}) - \gamma\,\mathbb{H}(\hat{p})$, where $q$ is the one-hot target vector, focal loss, while minimising the main KL-divergence objective, also increases the entropy of the prediction $\hat{p}$. As a consequence, this prevents the network from being overly confident on wrong predictions and overall improves calibration.
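For reference, below is a minimal PyTorch sketch of the focal loss with a fixed γ (a generic implementation, not the authors' code); the default γ = 3 mirrors a commonly used setting.

```python
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_{y_true}
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()  # -(1 - p)^gamma log p
```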
The regular focal loss with fixed γ, as we show in this section, does not achieve the best calibration. In Figure 1, we plot the calibration behaviour of ResNet50 in different bins when trained on CIFAR-10 with different focal losses. The ith bin's calibration error (subscripted by "val"), Eval,i = Cval,i − Aval,i, is computed on the validation set using 15 equal-mass bins. The figure shows the lowest (bin-0), a middle (bin-7), and the highest bin (bin-14). For reference, the rest of the bins and their bin boundaries are
shown in Appendix B. From Figure 1(a), we see that although focal loss γ = 4 achieves the overall lowest calibration error (AdaECE), there is no single γ that performs the best across all the bins. For example, in bin-0, γ = 4, 5 seem to achieve better calibration whereas γ = 0, 3 are over-confident. For bin-7, on the other hand, γ = 3 seems to be better calibrated whereas γ = 4, 5 are under-confident and γ = 0 is over-confident.
This clearly indicates that using different γs for different bins can further improve the calibration. Such an attempt is presented in Mukhoti et al. (2020), called the Sample-Dependent Focal Loss (FLSD-53), which assigns γ = 5 if the training sample's true class posterior p̂_ytrue ∈ [0, 0.2) and γ = 3 if p̂_ytrue ∈ [0.2, 1]. However, this strategy is fixed for every dataset-model pair and is based on the simple heuristic of choosing a higher γ for smaller values of p̂_ytrue and a relatively lower γ for higher values of p̂_ytrue. Yet, from Figure 1(b), we see that FLSD-53 is also not the best strategy across all the bins. This, therefore, motivates the design of a γ selection strategy that can assign an appropriate γ for each bin based on the magnitude and sign of Eval,i. However, in order to design such a strategy we need solutions to the following two major challenges:
1. How do we find some correspondence between the "confidence of training samples", which we can manipulate during training by adjusting the entropy regularising parameter γ, and the "confidence of the validation samples", which we want to be actually manipulated but do not have direct control over? In other words, in order to indirectly control the confidence of a particular group of validation samples, how do we know which particular group of training samples’ confidence to be manipulated?
2. Given that there is a correspondence between a training group and a validation group (even if it’s loose), how do we arrive at the exact values of γ that will lead to better calibration?
We try to answer the first question in the next section and the answer to the second question leads to AdaFocal which is the main contribution of the paper.
4 CORRESPONDENCE BETWEEN CONFIDENCE OF TRAIN AND VAL. SAMPLES
In order to find some correspondence, an intuitive thing to do would be to group the validation samples into M equal-mass validation-bins, and then use these validation-bin boundaries to group the training samples as well. Then, we can compare the average confidence of the validation samples and the average confidence of the training samples, in the same validation-bin, to check for any correspondence.
Quantities of interest For binning validation samples, we always look at the confidence of the top predicted class ŷ denoted by p̂val,top (bin average: Cval,top). For training samples, on the other hand, instead of the confidence of the top predicted class ŷ denoted by p̂train,top (bin average: Ctrain,top), we will focus on the confidence of the true class ytrue denoted by p̂train,true (average: Ctrain,true) because during training we only care about p̂train,true which is manipulated through some loss function. For reference however, Figure 10 in Appendix C compares Ctrain,true and Ctrain,top to show that as the training set accuracy approaches 100%, the top predicted class and the true class for a training sample become the same. Henceforth, for a cleaner notation, we will always refer to Ctrain ≡ Ctrain,true and Cval ≡ Cval,top.
Common binning When training samples are grouped using the bin boundaries of the validation-bins. In Figure 2(b), we compare Ctrain,i in validation-bin-i¹ with Cval,i in the same validation-bin-i, and find that there is indeed a good correspondence between the two quantities. For example, in Figure 2(b), as γ increases from 0, 3 to 5, the solid line (Ctrain,i) gets lower, and the same behaviour is observed on the starred line (Cval,i) as well. For completeness, the rest of the bins are shown in Figure 12 of Appendix C. This is very encouraging, as now we can expect (even though loosely) that if we increase/decrease the confidence of a group of training samples in some lower (or middle, or higher) probability region, then the same will be reflected on a similar group of validation samples in the lower (or middle, or higher) probability region. This therefore provides a way to indirectly control the value of Cval,i by manipulating Ctrain,i, and from a calibration point of view, our strategy going forward would be to exploit this correspondence to keep Ctrain,i (which we have control over during training) closer to Aval,i (the validation set accuracy in validation-bin-i), so that Cval,i also stays closer to Aval,i to overall reduce the calibration error Eval,i = Cval,i − Aval,i.
Independent binning Before proceeding, for completeness, we also look at the case when training samples and validation samples are grouped independently into their respective training-bins and validation-bins. Figure 2(a) compares Ctrain,i in training-bin-i with Cval,i in validation-bin-i. We observe a similar behaviour as mentioned above. Note that since the binning is independent, the boundaries of training-bin-i may not be exactly the same as those of validation-bin-i; however, as shown in Figure 11 of Appendix C (along with the rest of the bins and their bin boundaries), they are quite close, meaning that a training group in a lower (/middle/higher) probability region has a good correspondence with the validation group in a similar nearby region.
Going forward, for ease of algorithm design, we will simply stick to the case of "common binning", where training samples are grouped as per the validation-bin boundaries. This allows us to maintain a one-to-one correspondence between the boundaries of the ith training and validation group.
5 PROPOSED METHOD
Let's denote the nth training sample's true class posterior p̂_ytrue by pn. Given that pn falls into validation-bin-b, our goal is to keep pn, or as per the discussion above its averaged equivalent Ctrain,b, closer to Aval,b so that the same is reflected on Cval,b. For manipulating pn, we will utilize the regularization effect that focal loss's parameter γ has on the confidence of the predictions Mukhoti et al. (2020). At this point, one can choose to update γb based either on (1) how far pn is from Aval,b, i.e. γ = f(pn − Aval,b), or (2) how far Cval,b is from Aval,b, i.e. γ = f(Cval,b − Aval,b). Such a γ-update rule should ensure that whenever the model is over-confident, i.e. pn > Aval,b (or Cval,b > Aval,b), γ is increased so that the gradients get smaller, which prevents pn from increasing further. On the other hand, when pn < Aval,b (or Cval,b < Aval,b), i.e. the model is under-confident, we decrease γ so as to get larger gradients that in turn will increase pn².

¹It may happen that no training sample belongs within a particular validation-bin's boundaries. In that case, Ctrain,i has been shown to drop to zero, for example in bin-14 in Figure 2(b).
Based on this discussion, next we design and study a calibration-aware γ-update strategy called CalFocal, which with some additional modifications lead to AdaFocal.
5.1 CALIBRATION AWARE FOCAL LOSS (CALFOCAL)
Case 1: γ = f(pn − Aval,b). Treating Aval,b as the point from which we do not want pn to deviate, we make the focal loss parameter γ a function of pn − Aval,b to get
$$\mathcal{L}_{CalFocal}(p_n) = -(1-p_n)^{\gamma_n}\log p_n, \quad \text{with } \gamma_n = \exp\!\big(\lambda(p_n - A_{val,b})\big), \qquad (1)$$
where b is the validation-bin in which pn falls. The hyper-parameter λ is the scaling factor, which combined with the exponential function helps to quickly ramp γ up/down. The exponential function adheres to the γ-update rule mentioned earlier and also ensures γ > 0. Figure 3(a) plots LCalFocal vs. pn for Aval,b = 0.8. We see that based on the strength of λ, the loss drastically drops near pn = 0.8 and thereafter remains close to zero. This shows that LCalFocal aims to first push pn towards 0.8 and then slow its growth towards overconfidence. Next, in Figure 3(c), we find that CalFocal with λ = 10, 100 is able to reduce the calibration error compared to cross entropy, but it is still far from FLSD-53's performance. Also note in Figure 3(b) that a too-high λ (= 100) affects the accuracy of the model. Most importantly, Figure 3(d) compares Ctrain,i with Cval,i (and also Aval,i) for bin-0, where we find some evidence that the strategy of bringing pn or Ctrain,i (solid lines) closer to Aval,i (dashed lines) results in Cval,i (starred lines) getting closer to Aval,i as well, thus reducing the calibration error Eval,i = Cval,i − Aval,i slightly.
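A minimal sketch of the per-sample γ in Eq. 1 (illustrative, not the authors' code); `p_n` is the true-class posterior of a training sample, `A_val` the accuracy of the validation bin it falls into, and `lam` the scaling hyper-parameter λ.

```python
import math

def calfocal_gamma(p_n, A_val, lam=10.0):
    # gamma_n > 1 when the sample is over-confident (p_n > A_val), < 1 otherwise
    return math.exp(lam * (p_n - A_val))
```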
Case 2: γ = f(Cval,b − Aval,b). Note that Eq. 1 assigns a different γn for each training sample. To reduce computation and avoid using a different γn for each training sample, one can instead use a common γb for all the training samples that fall into validation-bin-b, by simply making it a function of Cval,b − Aval,b instead of pn − Aval,b:
$$\mathcal{L}_{CalFocal}(p_n) = -(1-p_n)^{\gamma_b}\log p_n, \quad \text{with } \gamma_b = \exp\!\big(\lambda(C_{val,b} - A_{val,b})\big), \qquad (2)$$
where b is the validation-bin in which pn falls. As shown in Appendix D, its performance is very similar to (or slightly better than) that of CalFocal in Eq. 1. Further, it makes more sense to update γ based on how far Cval,b is from Aval,b instead of how far pn is from Aval,b because, as shown in Figure 3(d) for bin-0, one may find Cval,b (starred lines) quite close to Aval,b (dashed lines) even when pn or its
²Note that for focal loss, increasing γ does not always lead to smaller gradients. This mostly holds true in the region pn approximately > 0.2 (see Figure 3(a) in Mukhoti et al. (2020)). However, in practice, and as shown by the training-bin boundaries of bin-0 and bin-1 in Figure 11 of Appendix C, we find the majority of the training samples to lie above 0.2 during the majority of the training; therefore, for the experiments in this paper, we simply stick to the rule of increasing γ to decrease gradients and stop pn from increasing, and vice versa.
average equivalent Ctrain (solid lines) is far from Aval,b. At the point when Cval,b = Aval,b, we should stop updating γ further, even though pn − Aval,b ≠ 0, as we have reached our goal of making Eval,b = Cval,b − Aval,b = 0. Therefore, we use Eq. 2 of Case 2 as the base for AdaFocal.
Limitations of CalFocal: (1) Let's say at some point of training, a high γb over the next few epochs reduces the calibration error Eval,b = Cval,b − Aval,b. Then, it is desirable to continue the training with the same high γb. However, note CalFocal's update rule in Eq. 2, which will reduce γ → 1 as Cval,b − Aval,b → 0. (2) At some point, let's say Cval,b − Aval,b is quite high. This will set γb to some high value as well, depending on the hyper-parameter λ. Assuming this γb is still not high enough to bring down the confidence, we would want a way to further increase γb. However, CalFocal is incapable of doing so, as it will continue to hold at γb = exp(λ(Cval,b − Aval,b)). By addressing these two issues, in the next sub-section we present the final algorithm, AdaFocal.
5.2 CALIBRATION-AWARE ADAPTIVE FOCAL LOSS (ADAFOCAL)
A straightforward way to address the above limitations is to make γb,t depend on γb,t−1 i.e.
$$\mathcal{L}(p_n, t) = -(1-p_n)^{\gamma_{b,t}}\log p_n, \quad \text{with } \gamma_{b,t} = \gamma_{b,t-1}\cdot\exp(C_{val,b} - A_{val,b}). \qquad (3)$$
This update rule addresses the limitations of CalFocal in the following way. Let's say at some point we observe over-confidence, i.e. Eval,b = Cval,b − Aval,b > 0. Then, in the next step γb will be increased. In the subsequent steps, it will continue to increase unless the calibration error Eval,b starts decreasing (this additional increase in γ was not possible with CalFocal). At this point, if we find Eval,b to start decreasing, that would reduce the increase in γb over the next epochs, and γb will ultimately settle down to a value where Eval,b = 0 (CalFocal at Eval,b = 0 would instead cause γ to go down to 1). Next, if this current value of γb starts causing under-confidence, i.e. Cval,b − Aval,b < 0, then the update rule will kick in to reduce γ, thus allowing Cval,b to be increased back to Aval,b. This oscillating behaviour of AdaFocal around the desired point Cval,b = Aval,b is its main advantage in reducing the calibration error in every bin. Additionally, also note the absence of the hyper-parameter λ in the exponent of Eq. 3, which makes AdaFocal hyper-parameter free.
Finally, note an undesirable property of Eq. 3, which is the unbounded exponential update. This may easily cause γt to explode, as it can be expanded as $\gamma_t = \gamma_{t-1}\exp(\mathcal{E}_{val,t}) = \gamma_0 \exp(\mathcal{E}_{val,1} + \cdots + \mathcal{E}_{val,t})$. Thus, if Eval,t > 0 for quite a few epochs, γt will become so large that even if Eval,t < 0 in the subsequent epochs, it may not decrease to a desired level. We remedy this by simply constraining γt to an upper bound γmax to get the AdaFocal loss as
$$\mathcal{L}_{AdaFocal}(p_n, t) = -(1-p_n)^{\gamma_{b,t}}\log p_n, \quad \text{with } \gamma_{b,t} = \min\{\gamma_{max},\ \gamma_{b,t-1}\cdot e^{\,C_{val,b}-A_{val,b}}\}. \qquad (4)$$
An algorithmic description of training with AdaFocal (or CalFocal) is given in Algorithm 1. Limitation: one may argue that γmax is again a hyper-parameter; however, note that it does not require any special fine-tuning. Its sole purpose is to stop γ from exploding, and any reasonable value around 20 works quite well in practice. For all our experiments, we use γmax = 20. For a comparison of AdaFocal with γmax = 20, γmax = 50 and unconstrained γmax = ∞, please refer to Appendix L.
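As a rough sketch of Eq. 4 (the full procedure is given in Algorithm 1), the per-bin γ update could look as follows; `gammas`, `C_val`, and `A_val` are assumed length-M arrays of per-bin values recomputed from the validation set each epoch.

```python
import numpy as np

def adafocal_gamma_update(gammas, C_val, A_val, gamma_max=20.0):
    # gamma_{b,t} = min(gamma_max, gamma_{b,t-1} * exp(C_val_b - A_val_b))
    return np.minimum(gamma_max, gammas * np.exp(C_val - A_val))
```

During training, a sample whose true-class posterior falls into validation-bin b would then be trained with the focal loss −(1 − pn)^{γ_b} log pn using the current γ_b.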
6 EXPERIMENTS
Experimental setup We evaluate the performance of our proposed method on image and text classification tasks. For image classification, we use CIFAR-10, CIFAR-100 Krizhevsky (2009), Tiny-ImageNet Deng et al. (2009), and ImageNet Russakovsky et al. (2015) to analyze the calibration of ResNet50, ResNet-100 He et al. (2016), Wide-ResNet-26-10 Zagoruyko & Komodakis (2016), and DenseNet-121 Huang et al. (2017) models. For text classification, we use the 20 Newsgroup dataset Lang (1995) and train the Global Pooling CNN model Lin et al. (2014). Further details about the datasets, models and experimental configurations are given in Appendix E.
Baseline As baseline calibration methods we use MMCE Kumar et al. (2018), Brier loss Brier (1950), Label smoothing Müller et al. (2019) and sample-dependent focal loss FLSD-53. We also report the effect of temperature scaling Guo et al. (2017) on top of these calibration methods. Following Mukhoti et al. (2020), we select the optimal temperature that produces the minimum ECE on the validation set by searching in the interval (0, 10] with a step size of 0.1.
Results. In Figure 4, we compare AdaFocal against cross entropy (CE) and FLSD-53 for ResNet-50 trained on various small to large-scale image datasets. We chose FLSD-53 as our competitive baseline
as it was shown to be consistently better than MMCE, Brier Loss and Label smoothing Mukhoti et al. (2020) across many datasets-model pairs. The figure plots the test set error and ECE calibration metric. In Figure 5, for ResNet-50 on CIFAR-10 and ImageNet, we plot (1) the calibration statistics Eval = Cval − Aval of the validation set and (2) the dynamics of associated γt used by AdaFocal during the training for a few bins covering lower, middle, and higher probability regions.
From these figures, we first observe that for CIFAR-10, CIFAR-100 and Tiny-ImageNet, FLSD-53 is much better calibrated than CE. This is because, as shown in Figure 5(a) for ResNet-50 and CIFAR-10, CE is over-confident compared to FLSD-53 in every bin. For ImageNet, however, the behaviour is reversed: FLSD-53 is more poorly calibrated than CE. The reason, as shown in Figure 5(b), is that due to the use of high values of γ = 5, 3, FLSD-53 makes the model largely under-confident in each bin, leading to an overall high calibration error. This shows that FLSD-53 is a strategy based on heuristics (from a limited number of dataset-model pairs) that does not generalize well. AdaFocal, on the other hand, is well calibrated for all four dataset-model pairs while achieving similar accuracy.
The dynamics/evolution of γt during training for different bins is shown in Figure 5: (1) for CIFAR-10, we find γt to be closer to 1 for higher bins and closer to 20 for the lower bins. These γs found by AdaFocal result in better calibration than the γ = 5, 3 of FLSD-53. (2) for ImageNet, we find AdaFocal's
γ → 0. This makes sense because for ImageNet, from Figure 4(d), cross entropy (i.e. γ = 0 for every bin) is much better calibrated than FLSD-53, and AdaFocal (starting from γ = 1) also ultimately settles down to CE (γ = 0) to achieve a similar level of calibration. This confirms that during training, unlike CE or FLSD-53, AdaFocal, being aware of the network's current under/over-confidence, is able to guide the γs to the values that maintain a well calibrated model at every step. Also note that for an unseen dataset-model pair there is no way to know beforehand which γ will perform better, but this empirical evidence shows that AdaFocal will automatically find the appropriate γs.
The rest of the experiments are shown in Table 1 (ECE) and Table 2 (Error)³. From Table 1, we observe that prior to temperature scaling AdaFocal outperforms the baseline methods by a substantial margin in 9 out of 11 cases. With post-temperature scaling included, AdaFocal achieves the lowest calibration error in 7 out of the 11 experiments. Further, observe that in many cases temperature scaling on top of AdaFocal does not offer any improvement (optimal temperature = 1). For the rest, the optimal temperature is close to 1, indicating that AdaFocal produces innately calibrated models during training itself. The consistency of AdaFocal across other calibration metrics is shown through AdaECE and classwise-ECE in Appendix F. ECE_debiased (15 and 30 bins), ECE_EW-sweep (equal-width), and ECE_EM-sweep (equal-mass) are reported in Appendix G. Significance of the results is confirmed through ECE error bars, with means and standard deviations computed over 5 runs, in Appendix H.
Number of bins The ECE metrics in the paper are reported using 15 bins. For AdaFocal training, we experiment with 5, 10, 15, 20, 30, and 50 equal-mass (adaptive) bins when drawing calibration statistics from the validation set, as reported in Appendix I. We find the best results to come from the range 10 to 20. Performance degrades when the number of bins is too small (< 10) or too large (> 20); therefore, for the AdaFocal training in the paper we use 15 bins as well.
Out-of-Distribution (OOD) detection. Following Mukhoti et al. (2020), we report the performance of AdaFocal on an OOD detection task. We train ResNet-110 and Wide-ResNet26-10 on CIFAR-10 as the in-distribution data and test on SVHN Netzer et al. (2011) and CIFAR-10-C Hendrycks & Dietterich (2019) (with level 5 Gaussian noise corruption) as OOD data. Using the entropy of the softmax output as the measure of uncertainty, the corresponding ROC plots are shown in Figure 6, and the exact AUROC scores are reported in Table 10 in Appendix J. We see that models trained with AdaFocal outperform focal loss γ = 3 (FL-3) and FLSD-53. These results further highlight the benefit of an inherently calibrated model produced using AdaFocal, as post-hoc techniques such as temperature scaling, as shown in the figure, are ineffective under distributional shift Snoek et al. (2019).

³While reproducing the baseline experiments in Mukhoti et al. (2020) we obtained very similar results; therefore, we simply borrow the exact values to maintain a consistent comparison.

[Table 1 (ECE): column headers — Dataset, Model, Cross Entropy, Brier Loss, MMCE, LS-0.05, FLSD-53, AdaFocal; the table body was not recovered.]
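A minimal sketch of this entropy-based OOD scoring (not the authors' evaluation code); `probs_in` and `probs_out` are assumed softmax outputs on the in- and out-of-distribution test sets.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def softmax_entropy(probs, eps=1e-12):
    return -(probs * np.log(probs + eps)).sum(axis=1)

def ood_auroc(probs_in, probs_out):
    scores = np.concatenate([softmax_entropy(probs_in), softmax_entropy(probs_out)])
    labels = np.concatenate([np.zeros(len(probs_in)), np.ones(len(probs_out))])
    return roc_auc_score(labels, scores)  # higher entropy should flag OOD inputs
```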
7 CONCLUSION
In this work, we first revisit the calibration properties of regular focal loss and highlight the downside of using a fixed γ for all samples. Particularly, by studying the calibration behaviour of different samples in different probability regions, we find that there is no single γ that achieves the best calibration over the entire region. We use this observation to motivate the selection of γ independently for each sample (or group of samples) based on the knowledge of the network's under/over-confidence. We propose a calibration-aware adaptive focal loss called AdaFocal that accounts for such information and updates the γt at every step based on γt−1 from the previous step and the magnitude of the network's under/over-confidence. We find AdaFocal to perform consistently better across different dataset-model pairs, producing innately calibrated models that most times do not substantially benefit from post-hoc processing with temperature scaling. Additionally, we find models trained with AdaFocal to be significantly better at the out-of-distribution detection task.
Reproducibility For reproducibility, we have included in the supplementary material a zip file that contains the code base for running the experiments. To run particular experiments:
• CIFAR-10, ResNet-50, Cross entropy: python train.py --dataset cifar10 --model resnet50 --loss cross_entropy --num_bins 15 -e 400 --save-path experiments/cifar10_resnet50_ce
• CIFAR-100, ResNet-50, Cross entropy: python train.py --dataset cifar100 --model resnet50 --loss cross_entropy --num_bins 15 -e 400 --save-path experiments/cifar100_resnet50_ce
• Tiny-ImageNet, ResNet-50, Cross entropy: python train.py --dataset tiny_imagenet --model resnet50_ti --loss cross_entropy --num_bins 15 --first-milestone 40 --second-milestone 60 -e 100 -b 64 -tb 64 --dataset-root data/tiny-imagenet-200 --save-path experiments/tinyImageNet_resnet50_ce
• 20 Newsgroups, CNN, Cross entropy: python main.py --loss cross_entropy --num-epochs 50 --num-bins 15 --save-path experiments/cnn_ce
APPENDICES
A ADAFOCAL’S GENERALIZATION TO LARGE SCALE DATASET (IMAGENET)
For ImageNet, FLSD-53 seems to perform very poorly in terms of calibration. The reason is that due to the higher values of γ = 5, 3, FLSD-53 becomes extremely under-confident in each bin, leading to a high calibration error. AdaFocal, on the other hand, remains well calibrated, which confirms that during training, unlike CE or FLSD-53, AdaFocal, being aware of the network's current under/over-confidence (through the validation set), is able to adjust the γs in a way that maintains a well calibrated model at every step. Further, in Figure 8, note the dynamics/evolution of γt in different bins. For ImageNet, we find AdaFocal's γ → 0, which makes sense because, from Figure 7, cross entropy (i.e. γ = 0 for every bin) is much better calibrated than FLSD-53, and AdaFocal (starting from γ = 1) settles down to CE (γ = 0). Note that for an unseen dataset-model pair it is not possible to know beforehand whether CE or focal loss will perform better. However, from these experiments, we find strong evidence that, for any dataset-model pair, AdaFocal will lead to the γs that result in the best calibration.
B CALIBRATION BEHAVIOUR OF FOCAL LOSS IN DIFFERENT BINS
In the main paper, we showed the calibration behavior of different focal losses for ResNet50 trained on CIFAR-10 for only a few bins. For completeness, the rest of the bins and their calibration error Ei = Cval,i − Aval,i are shown in Figure 9 for focal losses with γ = 0, 3, 4, 5. We observe that there is no single γ that performs the best across all the bins. Rather, every bin seems to have a particular γ that achieves the best calibration.
C CORRESPONDENCE BETWEEN CONFIDENCE OF TRAINING AND VALIDATION SAMPLES
D CALFOCAL LOSS
E DATASETS AND EXPERIMENTS
E.1 DATASET DESCRIPTION
CIFAR-10 Krizhevsky (2009): This dataset contains 60,000 coloured images of size 32 × 32, which are equally divided into 10 classes. A split of 45,000/5,000/10,000 images is used as the train/validation/test sets respectively.

CIFAR-100 Krizhevsky (2009): This dataset contains 60,000 coloured images of size 32 × 32, which are equally divided into 100 classes. A split of 45,000/5,000/10,000 images is used as the train/validation/test sets respectively.
ImageNet Russakovsky et al. (2015): ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 is an image classification and localization dataset. This dataset spans 1000 object classes and contains 1,281,167 training images and 50,000 validation images.
Tiny-ImageNet Deng et al. (2009): It is a subset of the ImageNet dataset with 64 × 64 dimensional images and 200 classes. It has 500 images per class in the training set and 50 images per class in the validation set.
20 Newsgroups Lang (1995): This dataset contains 20,000 news articles, categorised evenly into 20 different newsgroups. Some of the newsgroups are very closely related to each other (e.g. comp.sys.ibm.pc.hardware / comp.sys.mac.hardware), while others are highly unrelated (e.g. misc.forsale / soc.religion.christian). We use a train/validation/test split of 15,098/900/3,999 documents.
E.2 EXPERIMENTAL DETAILS
For all our experiments, we have used an Nvidia Titan X Pascal GPU with 12GB of memory.
CIFAR-10 and CIFAR-100: We use SGD with a momentum of 0.9 as our optimiser, and train the networks for 350 epochs, with a learning rate of 0.1 for the first 150 epochs, 0.01 for the next 100 epochs, and 0.001 for the last 100 epochs. We use a training batch size of 128. The training data is augmented by applying random crops and random horizontal flips.
Tiny-ImageNet: We use SGD with a momentum of 0.9 as our optimiser, and train the models for 100 epochs with a learning rate of 0.1 for the first 40 epochs, 0.01 for the next 20 epochs and 0.001 for the last 40 epochs. We use a training batch size of 64. Note that we use 50 samples per class (i.e. a total of 10000 samples) from the training set as the validation set. Hence, the training is only on 90000 images. We use the Tiny-ImageNet validation set as our test set.
ImageNet: We use SGD as our optimiser with momentum of 0.9 and weight decay $10^{-4}$, and train the models for 90 epochs with a learning rate of 0.01 for the first 30 epochs, 0.001 for the next 30 epochs and 0.0001 for the last 30 epochs. We use a training batch size of 128. We divide the 50,000 validation images into validation and test sets of 25,000 images each.
20 Newsgroups: We train the Global Pooling CNN Network Lin et al. (2014) using the Adam optimiser, with learning rate 0.001, and default betas 0.9 and 0.999. We used GloVe word embeddings Pennington et al. (2014) to train the network. We train the model for 50 epochs and use the model at the end to evaluate the performance.
All our experiments are implemented in PyTorch. The hyperparameters that are not explicitly mentioned above are set to their default values. For CIFAR-10/100 and Tiny-ImageNet, AdaFocal is implemented on top of the base code available from Mukhoti (2020). The code for 20 Newsgroups is implemented in PyTorch by adapting the code (TensorFlow) available from Kumar (2018).
The experimental results in the paper are reported for the model at the end of (1) CIFAR-10/100: 350 epochs, (2) Tiny-ImageNet: 100 epochs, (3) ImageNet: 90 epochs, and (4) 20 NewsGroups: 50 epochs.
F ADAECE AND CLASSWISE-ECE PERFORMANCE
Here, we compare the performance of AdaFocal against the baseline methods in terms of AdaECE and classwise-ECE in Table 3 and 4 respectively. For CIFAR-10/100, the values are reported for the model at the end of 350 epochs; for Tiny-ImageNet, at the end of 100 epochs; and for the 20 Newsgroups dataset, at the end of 50 epochs. From these tables, we observe that AdaFocal outperforms all the baseline methods by a substantial margin, especially if we compare the pre-temperature scaling results.
[Tables 3 and 4 (AdaECE and classwise-ECE): column headers — Dataset, Model, Cross Entropy, Brier Loss, MMCE, LS-0.05, FLSD-53, AdaFocal; the table bodies were not recovered.]
[Further tables: column headers — Dataset, Model, Cross Entropy, FLSD-53, AdaFocal; the table bodies were not recovered.]
G DEBIASED ESTIMATES OF ECE
[Table (debiased ECE estimates): column headers — Dataset, Model, Cross Entropy, FLSD-53, AdaFocal; the table body was not recovered.]
H ECE ERROR BARS
I NUMBER OF BINS USED DURING ADAFOCAL TRAINING
Experiment details: CIFAR-10, ResNet50 trained for 350 epochs. The reported results below are without temperature scaling. Our method AdaFocal with 5, 10, 15, 20, 30, and 50 adaptive (equal-mass) bins vs. FLSD-53. Note that there are two types of binning here: the adaptive binning used during AdaFocal training to draw calibration statistics from the validation set, and the binning used to compute the reported ECE metric.
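For clarity, below is a minimal sketch of the adaptive (equal-mass) binning used to draw the validation statistics; this is a generic quantile-based construction, not necessarily the exact implementation used for these experiments.

```python
import numpy as np

def equal_mass_bin_edges(conf, M=15):
    # quantile edges so that each of the M bins holds ~N/M samples
    edges = np.quantile(conf, np.linspace(0.0, 1.0, M + 1))
    edges[0], edges[-1] = 0.0, 1.0
    return edges

def bin_assignments(conf, edges):
    # map each confidence to its bin index; clip handles conf == 1.0
    return np.clip(np.searchsorted(edges, conf, side="right") - 1,
                   0, len(edges) - 2)
```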
J AUROC FOR OUT-OF-DISTRIBUTION DETECTION
For ResNet110 on CIFAR-10/SVHN, we were not able to reproduce the reported results of 96.74, 96.92 for FL-3 in Mukhoti et al. (2020). Instead, we found those values to be 90.27, 90.39 and report them in Table 10.
K MOVING AVERAGE γ-UPDATE RULE
For the focal loss in the paper, $\mathcal{L}(p_n, t) = -(1-p_n)^{\gamma_{b,t}}\log p_n$, the unconstrained γ-update rule for AdaFocal is given by
\begin{align*}
\gamma_{t+1} &= \gamma_t \cdot \exp(C_{val,t+1} - A_{val,t+1}) \qquad &(5)\\
&= \gamma_t \cdot \exp(\mathcal{E}_{val,t+1}) \qquad &(6)
\end{align*}
If instead we use exponential moving average to update γ, then the update rule (let’s call it MA-α) is given by
\begin{align*}
\gamma_{t+1} &= (\gamma_t)^{\alpha} \cdot \left(e^{\mathcal{E}_{val,t+1}}\right)^{1-\alpha} \qquad &(7)\\
&= \gamma_{t-1}^{\alpha} \cdot e^{\alpha\,\mathcal{E}_{val,t}} \cdot e^{(1-\alpha)\,\mathcal{E}_{val,t+1}} \qquad &(8)\\
&= \gamma_{t-1}^{\alpha} \cdot e^{\left[\alpha\,\mathcal{E}_{val,t} + (1-\alpha)\,\mathcal{E}_{val,t+1}\right]} \qquad &(9)
\end{align*}
where the second line substitutes the unconstrained rule $\gamma_t = \gamma_{t-1}\,e^{\mathcal{E}_{val,t}}$ from Eq. (6).
The evolution or dynamics of γ is given in Figure 16.
L MULTIPLE RUNS OF ADAFOCAL WITH DIFFERENT γmax
Due to the stochastic nature of the experiments, AdaFocal's γs may end up following different trajectories across different runs (initializations), which in turn might lead to variations in the final results. In this section, we look at the extent of such variations for (1) unconstrained γ, (2) γ capped at γmax = 20, and (3) γ capped at γmax = 50. We study this for ResNet-50 trained multiple times on CIFAR-10 starting with different random seeds.
L.1 ADAFOCAL, γmax = 20
In Figure 17, we observe that AdaFocal with γmax = 20 is consistently (9 out of 9 times) better than FLSD-53. Figure 18 shows the evolution of γ across different runs.
L.2 ADAFOCAL, γmax = 50
In Figure 19, we observe that AdaFocal with γmax = 50 has more variability than AdaFocal with γmax = 20 but is mostly better than FLSD-53. Figure 20 shows the evolution of γ across different runs.
L.3 ADAFOCAL, UNCONSTRAINED γ
In Figure 21, we observe that AdaFocal with unconstrained γ does exhibit some variability across different runs: 7 out of 9 times it performs better than FLSD-53 whereas the other two times it is similar or slightly worse.
The above behaviour is mostly due to the variations in the trajectory of γs for lower bins, as shown in Figure 22. For higher bins, we see the γs settling to similar values; however, for lower bins, since the γs are unconstrained, they blow up to very high values.
M ERROR, ECE AND BIN STATISTICS PLOTS FOR THE REST OF THE EXPERIMENTS | 1. What is the focus and contribution of the paper regarding model calibration?
2. What are the strengths of the proposed approach, particularly in terms of its intuitive nature and empirical validation?
3. What are the weaknesses of the paper, especially regarding its presentation and potential biases in the experimental results?
4. Do you have any concerns or suggestions regarding the methodology used in the paper, such as the updating mechanism for the focal loss parameter?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose a new focal loss method for calibrating a model via regularization during training. They propose an adaptive method that adjusts the focal loss parameter based on empirical validation performance, according to each bin's over/under-confidence.
Review
Pros
The authors consider a wide variety of recent calibration literature.
The approach is intuitive and is verified by empirical results showing that the patterns between validation and training ECE seem to hold.
The authors motivate the need by empirically showing that a single focal loss parameter is not optimal among different bins.
The experiments are extensive and cover important calibration and OOD metrics.
Cons
The axes and legends in figure 1 are too small and very hard to read
The axes are unlabeled in figure 2a
Section 5: "this mostly holds true" → It is not clear what "this" refers to. From the context it appears that "this" refers to "focal loss not leading to smaller gradients." It appears the next sentence leads to the opposite conclusion, which is confusing.
Figure 5 is too cluttered and extremely hard to read
Table 1 and the preceding paragraph: In both places, the text states that AdaFocal’s performance is averaged over 5 runs, which implies that the baselines are not. How many runs were the baselines trained for?
The authors highlight that AdaFocal gives the best performance among the “Pre T” results, but isn’t it also the case that AdaFocal is the only algorithm which has made use of the validation data at this point? This would explain why there is not much improvement with temperature scaling and AdaFocal.
The method of updating γ is based on summing the exponents together, which may lead to large values that are mitigated by clamping to a maximum value. Have the authors considered using an exponential moving average of the exponents, such that the argument to the exponential is of the form αE_{val,t} + (1 − α)E_{val,t+1}? I believe this would prevent the instability of the sum and not require any clamping to a maximum value.
Minor
Page 3, section 4: The sentence which begins with "Nonetheless, for completeness…" is missing a parenthesis and probably also has an unnecessary comma, which makes the sentence confusing to read.
Section 4: approaches towards → approaches.
Section 5: During majority → during the majority
Section 5.1: hyperparameter γ → hyperparameter γ
Section 5.2: this update rule do → this update rule does
Section 5.2: reign in → rein in
ICLR | Title
Reliable Adversarial Distillation with Unreliable Teachers
Abstract
In ordinary distillation, student networks are trained with soft labels (SLs) given by pretrained teacher networks, and students are expected to improve upon teachers since SLs are stronger supervision than the original hard labels. However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students. Therefore, in this paper, we propose reliable introspective adversarial distillation (IAD) where students partially instead of fully trust their teachers. Specifically, IAD distinguishes between three cases given a query of a natural data (ND) and the corresponding adversarial data (AD): (a) if a teacher is good at AD, its SL is fully trusted; (b) if a teacher is good at ND but not AD, its SL is partially trusted and the student also takes its own SL into account; (c) otherwise, the student only relies on its own SL. Experiments demonstrate the effectiveness of IAD for improving upon teachers in terms of adversarial robustness.
1 INTRODUCTION
Deep Neural Networks (DNNs) have shown excellent performance on a range of tasks in computer vision (He et al., 2016) and natural language processing (Devlin et al., 2019). Nevertheless, Szegedy et al. (2014); Goodfellow et al. (2015) demonstrated that DNNs can be easily fooled by adding small perturbations to natural examples, which raises concerns about the robustness of DNNs in trustworthy-sensitive areas, e.g., finance (Kumar et al., 2020) and autonomous driving (Litman, 2017). To overcome this problem, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) has been proposed and has shown effectiveness in acquiring adversarially robust DNNs.
Most existing adversarial training approaches focus on learning from data directly. For example, the popular adversarial training (AT) (Madry et al., 2018) leverages multi-step projected gradient descent (PGD) to generate adversarial examples and feed them into standard training. Zhang et al. (2019) developed TRADES on the basis of AT to balance standard accuracy and robust performance. Recently, several methods under this paradigm have been developed to improve model robustness (Wang et al., 2019; Alayrac et al., 2019; Carmon et al., 2019; Zhang et al., 2020; Jiang et al., 2020; Ding et al., 2020; Du et al., 2021; Zhang et al., 2021). However, directly learning from adversarial examples is a challenging task on complex datasets, since the loss with hard labels is difficult to optimize (Liu et al., 2020), which limits us from achieving higher robust accuracy.
To mitigate this issue, one emerging direction is distilling robustness from an adversarially pre-trained model intermediately, which has shown promise in recent studies (Zi et al., 2021; Shu et al., 2021). For example, Ilyas et al. (2019) used an adversarially pre-trained model to build a "robustified" dataset to learn a robust DNN. Fan et al. (2021); Salman et al. (2020) explored boosting the model
† Corresponding author (bhanml@comp.hkbu.edu.hk).
robustness through fine-tuning or transfer learning from adversarially pre-trained models. Goldblum et al. (2020) and Chen et al. (2021) investigated distilling the robustness from adversarially pre-trained models, termed adversarial distillation for simplicity, where they encouraged student models to mimic the outputs (i.e., soft labels) of the adversarially pre-trained teachers.
However, one critical difference is: in conventional distillation, the teacher model and the student model share the natural training data; while in adversarial distillation, the adversarial training data of the student model and that of the teacher model are egocentric (respectively generated by themselves) and become more adversarially challenging during training. Given this distinction, are the soft labels acquired from the teacher model in adversarial distillation always reliable and informative guidance? To answer this question, we take a closer look at the process of adversarial distillation. As shown in Figure 1(a), we discover that along with the training, the teacher model progressively fails to give a correct prediction for the adversarial data queried by the student model. The reason could be that, with the student being more adversarially robust and thus the adversarial data being harder, it is too demanding to require that the teacher is always good at every adversarial example queried by the student model, as the teacher model has never seen these data in its pre-training. In contrast, in conventional distillation, student models are expected to distill the "static" knowledge from the teacher model, since the soft labels for the natural data from the teacher model are always fixed.
The observation in Figure 1(a) raises the challenge: how do we conduct reliable adversarial distillation with unreliable teachers? To solve this problem, we can categorize the training data into three cases according to the predictions on the natural and the student-generated adversarial data. First, if the teacher model can correctly classify both natural and adversarial data, it is reliable. Second, if the teacher model can correctly classify the natural but not the adversarial data, it should be partially trusted, and the student model is suggested to trust itself to enhance model robustness, as in adversarial regularization (Zhang et al., 2019). Third, if the teacher model can correctly classify neither the natural nor the adversarial data, the student model is recommended to trust itself totally. According to this intuition, we propose Introspective Adversarial Distillation (IAD) to effectively utilize the knowledge from an adversarially pre-trained teacher model. The framework of our proposed IAD can be seen in Figure 1(b). Briefly, the student model is encouraged to partially instead of fully trust the teacher model, and to gradually trust itself more as it becomes more adversarially robust. We conduct extensive experiments on the benchmark CIFAR-10/CIFAR-100 and the more challenging Tiny-ImageNet datasets to evaluate the effectiveness of our IAD. The main contributions of our work can be summarized as follows.
1. We take a closer look at adversarial distillation under the teacher-student paradigm. Considering adversarial robustness, we discover that the guidance from the teacher model is progressively unreliable along with the adversarial training.
2. We construct the reliable guidance for adversarial distillation by flexibly utilizing the robust knowledge from the teacher model: (a) if a teacher is good at adversarial data, its soft labels can be fully trusted; (b) if a teacher is good at natural data but not adversarial data, its soft
labels should be partially trusted and the student also takes its own soft labels into account; (c) otherwise, the student only relies on its own soft labels.
3. We propose an Introspective Adversarial Distillation (IAD) to automatically realize the intuition of the previous reliable guidance during adversarial distillation. The experimental results confirm that our approach can improve adversarial robustness across a variety of training settings and evaluations, and also on datasets that are challenging in terms of adversarial robustness (e.g., CIFAR-100 (Krizhevsky, 2009) and Tiny-ImageNet (Le & Yang, 2015)) or when using large models (e.g., WideResNet (Zagoruyko & Komodakis, 2016)).
2 RELATED WORK
2.1 ADVERSARIAL TRAINING.
Adversarial examples (Goodfellow et al., 2015) motivate many defensive approaches developed in the last few years. Among them, adversarial training has been demonstrated as the most effective method to improve the robustness of DNNs (Cai et al., 2018; Wang et al., 2020; Jiang et al., 2020; Chen et al., 2021; Sriramanan et al., 2021). The formulation of the popular AT (Madry et al., 2018) and its variants can be summarized as the minimization of the following loss:
$$\min_{f_\theta \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_\theta(\tilde{x}_i),\, y_i\big), \qquad (1)$$
where n is the number of training examples, x̃i is the adversarial example within the ε-ball (bounded by an Lp-norm) centered at the natural example xi, yi is the associated label, fθ is the DNN with parameters θ, and ℓ(·) is the standard classification loss, e.g., the cross-entropy loss. Adversarial training leverages adversarial examples to smooth the small neighborhood, making the model prediction locally invariant. To generate the adversarial examples, AT employs the PGD method (Madry et al., 2018). Concretely, given a starting point x̃(0) ∈ X and step size β > 0, PGD recursively searches
$$\tilde{x}^{(t+1)} = \Pi_{\mathcal{B}_\epsilon[\tilde{x}^{(0)}]}\Big(\tilde{x}^{(t)} + \beta\,\mathrm{sign}\big(\nabla_{\tilde{x}^{(t)}}\,\ell(f_\theta(\tilde{x}^{(t)}), y)\big)\Big), \qquad (2)$$
until a certain stopping criterion is satisfied. In Eq. (2), t ∈ N, ℓ is the loss function, x̃(t) is the adversarial data at step t, y is the corresponding label of the natural data, and Π_Bε[x̃(0)](·) is the projection function that projects the adversarial data back into the ε-ball centered at x̃(0).
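To make the PGD recursion of Eq. (2) concrete, the following PyTorch sketch implements the inner maximization with a fixed number of steps; the random start and the stopping criterion are simplified, and all names (e.g., pgd_attack, epsilon, step_size) are our illustrative assumptions rather than the authors' code.

```python
# A minimal PGD sketch of Eq. (2); a sketch under stated assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, step_size=2/255, num_steps=10):
    """Iteratively ascend the loss and project back into the epsilon-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)              # l(f_theta(x_tilde), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()          # signed gradient ascent step
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)  # L_inf projection
            x_adv = x_adv.clamp(0.0, 1.0)                    # keep a valid image range
    return x_adv.detach()
```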
2.2 KNOWLEDGE DISTILLATION
The idea of distillation from other models dates back to Craven & Shavlik (1996), and was re-introduced by Hinton et al. (2015) as knowledge distillation (KD). It has been widely studied in recent years (Yao et al., 2021) and works well in numerous applications like model compression and transfer learning. For adversarial defense, a few studies have explored obtaining adversarially robust models by distillation. Papernot et al. (2016) proposed defensive distillation, which utilizes the soft labels produced by a standard pre-trained teacher model, but this method was later shown not to be resistant to the C&W attacks (Carlini & Wagner, 2017); Goldblum et al. (2020) combined AT with KD to transfer robustness to student models, and they found that the distilled models can outperform adversarially pre-trained teacher models of identical architecture in terms of adversarial robustness; Chen et al. (2021) utilized distillation as a regularization for adversarial training, employing robust and standard pre-trained teacher models to address robust overfitting (Rice et al., 2020).
Nonetheless, all these related methods fully trust teacher models and do not consider whether the guidance of the teacher model in distillation is reliable or not. In this paper, different from the previous studies, we find that the teacher model in adversarial distillation is not always trustworthy. Formally, adversarial distillation minimizes Ex̃∈B[x][ℓKL(S(x̃|τ)||T(x̃|τ))], where T(x̃|τ) is not a fixed soft supervision during adversarial training, since it is affected by the adversarial data generated by the dynamically evolving student network. Based on this, we propose reliable IAD to encourage student models to partially instead of fully trust teacher models, which effectively utilizes the knowledge from adversarially pre-trained models.
3 A CLOSER LOOK AT ADVERSARIAL DISTILLATION
In Section 3.1, we discuss the unreliability issue of adversarial distillation, i.e., the guidance of the teacher model becomes progressively unreliable as adversarial training proceeds. In Section 3.2, we partition the training examples into three parts and analyze them part by part. Specifically, we argue that the student model should partially instead of fully trust the teacher model, and should gradually trust itself more along with adversarial training.
3.1 FULLY TRUST: PROGRESSIVELY UNRELIABLE GUIDANCE
As mentioned in the Introduction, previous methods (Goldblum et al., 2020; Chen et al., 2021) fully trust the teacher model when distilling robustness from adversarially pre-trained models. Taking Adversarial Robust Distillation (ARD) (Goldblum et al., 2020) as an example, we illustrate its procedure in the left part of Figure 1(b): the student model generates its adversarial data and then optimizes its predictions on them to mimic the output of the teacher model. However, although the teacher model is well optimized on the adversarial data queried by itself, we argue that it might not always be good at the increasingly challenging adversarial data queried by the student model.
As shown in Figure 1(a), unlike ordinary distillation, in which the teacher model's standard performance on the natural data stays consistent, its robust accuracy on the student model's adversarial data decreases during distillation. The guidance of the teacher model gradually fails to give the correct output on the adversarial data queried by the student model.
3.2 PARTIALLY TRUST: CONSTRUCTION OF RELIABLE GUIDANCE
The unreliability of the teacher model in adversarial distillation raises the challenge of how to conduct reliable adversarial distillation with unreliable teachers. Intuitively, this requires us to reconsider the guidance of adversarially pre-trained models along with the adversarial training. For simplicity, we use T(x) (T(x̃)) to represent the predicted label of the teacher model on the natural (adversarial) examples, and use y to represent the target label. We partition the adversarial samples into the following parts, as shown in the toy illustration (Figure 2(a)), and analyze them part by part.
1) T(x) = y ∩ T(x̃) = y: As shown in Figure 2(a), this part of the data, whose adversarial variants are like x′1, is the most trustworthy among the three parts, since the teacher model performs well on both natural and adversarial data. In this case, we can trust the guidance of the teacher model on this part of the data. However, as shown in Figure 2(b), we find that the number of samples in this part decreases along with adversarial training. That is, what we can rely on from the teacher model in adversarial distillation is progressively reduced.
2) T(x) = y ∩ T(x̃) ≠ y: In Figure 2(b), we also check the change in the number of data points whose adversarial variants are like x′′1. Mirroring the previous category, the number of this kind of data increases during distillation. Since the teacher model's outputs on the small neighborhood of the queried natural data are not always correct, its knowledge may not be robust and its guidance for the student model is not reliable. Recalling the reason for the decrease in the robust accuracy of the teacher model, the student model itself may also be trustworthy, since it gradually becomes adversarially robust during distillation.
3) T(x) ≠ y ∩ T(x̃) ≠ y: As for the data like x2 in Figure 2(a), the guidance of the teacher model is totally unreliable, since its predicted labels on the natural data are already wrong. The student model may instead trust itself and encourage its outputs on the adversarial data to mimic its outputs on the corresponding natural data rather than the wrong outputs from the teacher model. First, this removes the potential threat that the teacher's guidance may act as a kind of noisy label during training. Second, as an adversarial regularization (Zhang et al., 2019), it can improve model robustness by enhancing the stability of the model's outputs on the natural and the corresponding adversarial data.
4) T(x) ≠ y ∩ T(x̃) = y: Considering the generation process of the adversarial data, i.e., $\tilde{x}^* = \arg\max_{\tilde{x}\in\mathcal{B}_\epsilon(x)} \ell(f(\tilde{x}), y)$, once the original prediction is wrong, i.e., T(x) ≠ y, the generation of x̃* only makes the prediction worse. Thus, this group does not exist.
To sum up, we suggest employing reliable guidance from the teacher model and encouraging the student model to trust itself more, as the teacher model's guidance becomes progressively unreliable and the student model gradually becomes more adversarially robust.
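To illustrate the partition above, the following sketch (our assumption, not the authors' code) splits a batch into the three cases using the teacher's predictions on natural data x and on the student-generated adversarial data x_adv.

```python
# Partition a batch into the three cases of Section 3.2 (illustrative sketch).
import torch

@torch.no_grad()
def partition_cases(teacher, x, x_adv, y):
    pred_nat = teacher(x).argmax(dim=1)        # T(x)
    pred_adv = teacher(x_adv).argmax(dim=1)    # T(x_adv)
    case1 = (pred_nat == y) & (pred_adv == y)  # fully trust the teacher
    case2 = (pred_nat == y) & (pred_adv != y)  # partially trust; add student introspection
    case3 = (pred_nat != y) & (pred_adv != y)  # rely on student introspection only
    return case1, case2, case3                 # boolean masks over the batch
```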
4 INTROSPECTIVE ADVERSARIAL DISTILLATION
Based on the previous analysis of adversarial distillation, we propose Introspective Adversarial Distillation (IAD) to better utilize the guidance from the adversarially pre-trained model. Concretely, we use the following KD-style loss, composed of teacher guidance and student introspection:
$$\ell_{\mathrm{IAD}} = \underbrace{O(\mathrm{AD}_i;\alpha)}_{\text{Label \& Teacher Guidance}} + \gamma\,\underbrace{\ell_{\mathrm{KL}}\big(S(\tilde{x}|\tau)\,\|\,S(x|\tau)\big)}_{\text{Student Introspection}}, \qquad (3)$$
where O(ADi; α) is a previous adversarial distillation baseline, e.g., ARD (Goldblum et al., 2020) or AKD2 (Chen et al., 2021), weighted by the hyper-parameter α,¹ and γ is a weight for the student introspection. S(·|τ) is a Softmax operator with temperature τ ∈ (0, +∞), i.e., $S(x_k|\tau) = \frac{\exp(x_k/\tau)}{\sum_{k'} \exp(x_{k'}/\tau)}$; S(·) is the conventional Softmax with temperature τ = 1; T(·|τ) is the tempered variant of the teacher output T(·); x̃ is the adversarial data generated from the natural data x; y is the hard label; ℓCE is the cross-entropy loss; and ℓKL is the KL-divergence loss. As for the annealing parameter α ∈ [0, 1] that balances the effect of the teacher model in adversarial distillation, based on the analysis of the reliability of adversarial supervision in Section 3, we define it as
$$\alpha = \big(P_T(\tilde{x}|y)\big)^{\beta}, \qquad (4)$$
where PT(·|y) is the prediction probability of the teacher model on the target label y, and β is a hyper-parameter that sharpens the prediction. The intuition behind IAD is to automatically calibrate the guidance from the teacher model based on its prediction on the adversarial data. Our α naturally corresponds to the construction in Section 3.2, since the teacher model's prediction probability on the adversarial data represents the categorical information well. As for β, we plot the specific values of α under its adjustment in the left panel of Figure 4.
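As a concrete sketch of Eq. (4), the snippet below (names such as compute_alpha, beta, and tau are our assumptions) extracts the teacher's tempered probability of the true label on the adversarial data and raises it to the power β.

```python
# Annealing weight alpha of Eq. (4) from teacher probabilities (illustrative sketch).
import torch
import torch.nn.functional as F

@torch.no_grad()
def compute_alpha(teacher, x_adv, y, beta=0.1, tau=1.0):
    probs = F.softmax(teacher(x_adv) / tau, dim=1)         # tempered softmax S(.|tau)
    p_true = probs.gather(1, y.unsqueeze(1)).squeeze(1)    # teacher probability of label y
    return p_true.pow(beta)                                # alpha = P_T^beta, per example
```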
Intuitively, the student model can trust the teacher model when α approaches 1, which means the teacher model is good at both natural and adversarial data. However, when α approaches 0, the teacher model is good at natural but not adversarial data, or not good at either, and thus the student model should take its self-introspection into account. In Figure 3, we check the reliability of the student model itself. From the left panel of Figure 3, we can see that the student model becomes progressively robust to the adversarial data. And if we incorporate the student introspection into the adversarial distillation, the results in the middle panel of Figure 3 confirm its potential benefits for improving the accuracy of the guidance.
¹Note that we do not use α when ADi is ARD. Please refer to Appendix A.2 for the ablation study.
Algorithm 1 Introspective Adversarial Distillation (IAD)
Input: student model S, teacher model T, training dataset D = {(xi, yi)}ni=1, learning rate η, number of epochs N, batch size m, number of batches M, temperature parameter τ, annealing parameter α on the teacher model's predicted probability, adjustable parameters λ, λ1, λ2, λ3, and γ.
Output: adversarially robust model Sr
for epoch = 1, ..., N do
    for mini-batch = 1, ..., M do
        Sample a mini-batch {(xi, yi)}mi=1 from D
        for i = 1, ..., m (in parallel) do
            Obtain adversarial data x̃i of xi by PGD based on Eq. (2)
            Compute α for each adversarial example based on Eq. (4)
        end for
        IAD-I: θ ← θ − η∇θ { λ ℓCE(S(x), y) + (1 − λ) · τ² · ( ℓKL(S(x̃|τ)||T(x|τ)) + γ ℓKL(S(x̃|τ)||S(x|τ)) ) }
        or
        IAD-II: θ ← θ − η∇θ { λ1 ℓCE(S(x̃), y) + λ2 · τ² · ( α · ℓKL(S(x̃|τ)||Tat(x̃|τ)) + λ3 τ² ℓKL(S(x̃|τ)||Tst(x̃|τ)) + γ ℓKL(S(x̃|τ)||S(x|τ)) ) }
    end for
end for
Moreover, as shown in the right panel of Figure 3, adding self-introspection yields a larger improvement in model robustness than using the guidance of the teacher model alone. Therefore, ℓIAD automatically encourages the outputs of the student model to mimic more reliable guidance in adversarial distillation.
Algorithm 1 summarizes the implementation of Introspective Adversarial Distillation (IAD). Specifically, IAD first leverages PGD to generate the adversarial data for the student model. Second, IAD computes the outputs of the teacher model and the student model on the natural data. Then, IAD trains the student model to partially mimic both the teacher's outputs and its own, weighted by the teacher model's predicted probability on the adversarial data.
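As a minimal sketch of the IAD-I update in Algorithm 1 (with γ = 1 − α as in Section 5.1), the snippet below assembles the cross-entropy, teacher-guidance, and introspection terms. The KL direction, the detached introspection target, and all names are our assumptions, not a definitive implementation.

```python
# IAD-I loss sketch (assumes `pgd_attack` and `compute_alpha` from the sketches above).
import torch
import torch.nn.functional as F

def iad1_loss(student, teacher, x, x_adv, y, alpha, lam=0.0, tau=1.0):
    s_nat, s_adv = student(x), student(x_adv)
    with torch.no_grad():
        t_nat = teacher(x)                                   # teacher output on natural data
    log_s_adv = F.log_softmax(s_adv / tau, dim=1)
    kl_teacher = F.kl_div(log_s_adv, F.softmax(t_nat / tau, dim=1),
                          reduction="none").sum(dim=1)       # teacher guidance term
    kl_self = F.kl_div(log_s_adv, F.softmax(s_nat.detach() / tau, dim=1),
                       reduction="none").sum(dim=1)          # student introspection
    gamma = 1.0 - alpha                                      # trust self more when teacher fails
    distill = (tau ** 2) * (kl_teacher + gamma * kl_self).mean()
    return lam * F.cross_entropy(s_nat, y) + (1.0 - lam) * distill
```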
Warming-up period. During training, we add a warming-up period to activate the student model, where α (in Eq. (3)) is hardcoded to 1. This is because the student itself is not trustworthy in the early stage (see the left panel of Figure 3). In this way, we expect the student model to first evolve into a relatively reliable learner and then conduct the IAD procedure.
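A minimal sketch of this warming-up schedule, assuming per-example α tensors and an illustrative warmup_epochs value:

```python
import torch

def annealed_alpha(raw_alpha, epoch, warmup_epochs=60):
    # Fully trust the teacher during warm-up, then switch to the Eq. (4) weight.
    return torch.ones_like(raw_alpha) if epoch < warmup_epochs else raw_alpha
```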
4.1 COMPARISON WITH RELATED METHODS
In this section, we discuss the differences between IAD and other related approaches from the perspective of their loss functions. Table 1 summarizes all of them.
As shown in Table 1, AT (Madry et al., 2018) utilizes the hard labels to supervise adversarial training; TRADES (Zhang et al., 2019) decomposes the loss function of AT into two terms, one for standard training and the other for adversarial training with soft supervision. Motivated by KD (Hinton et al., 2015), Goldblum et al. (2020) proposed ARD to conduct adversarial distillation, which fully trusts the outputs of the teacher model when learning the student model. As indicated by the experiments in Goldblum et al. (2020), a larger λ results in less robust student models, so they generally set λ = 0 in their experiments. Chen et al. (2021) utilized distillation as a regularization to avoid the robust overfitting issue, employing both an adversarially pre-trained teacher model and a standard pre-trained model; since this involves two KL-divergence losses, we term their method AKD2 for simplicity. Regarding IAD, there are two implementations, based on ARD and AKD2 respectively. We term them IAD-I and IAD-II; their difference from ARD and AKD2 is an additional self-introspection term. Besides, we also apply α to downweight the dependency on the term ℓKL(S(x̃|τ)||Tat(x̃|τ)), as explained in the previous sections.
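For completeness, here is a sketch of the IAD-II objective (the AKD2-based variant), combining cross-entropy on adversarial data, an α-weighted KL to the adversarially trained teacher Tat, a KL to the standard teacher Tst, and the introspection term. The weights follow Section 5.1 (λ1 = 0.25, λ2 = 0.5, λ3 = 0.25), while the exact grouping of the τ² factors and all names are our assumptions.

```python
# IAD-II loss sketch (illustrative; t_at = adversarially trained teacher, t_st = standard teacher).
import torch
import torch.nn.functional as F

def iad2_loss(student, t_at, t_st, x, x_adv, y, alpha, gamma, tau=1.0,
              lam1=0.25, lam2=0.5, lam3=0.25):
    s_nat, s_adv = student(x), student(x_adv)
    with torch.no_grad():
        tat_adv, tst_adv = t_at(x_adv), t_st(x_adv)
    log_s_adv = F.log_softmax(s_adv / tau, dim=1)

    def kl(target_logits):
        # KL between the student's adversarial output and a softened target.
        return F.kl_div(log_s_adv, F.softmax(target_logits / tau, dim=1),
                        reduction="none").sum(dim=1)

    return (lam1 * F.cross_entropy(s_adv, y)
            + lam2 * (tau ** 2) * (alpha * kl(tat_adv)).mean()
            + lam3 * (tau ** 2) * kl(tst_adv).mean()
            + (tau ** 2) * (gamma * kl(s_nat.detach())).mean())
```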
5 EXPERIMENTS
We conduct extensive experiments to evaluate the effectiveness of IAD. In Section 5.1, we compare IAD with benchmark adversarial training methods (AT and TRADES) and related methods that utilize adversarially pre-trained models via KD (ARD and AKD2) on the CIFAR-10/CIFAR-100 (Krizhevsky, 2009) datasets. In Section 5.2, we compare the previous methods with IAD on the more challenging Tiny-ImageNet dataset (Le & Yang, 2015). In Section 5.3, ablation studies analyze the effects of the hyper-parameter β and different warming-up periods for IAD.
Regarding the measures, we compute the natural accuracy on the natural test data and the robust accuracy on the adversarial test data generated by FGSM, PGD-20, and C&W attacks, following Wang et al. (2019). Moreover, we evaluate the performance under AutoAttack (termed AA).
5.1 EVALUATION ON CIFAR-10/CIFAR-100 DATASETS
Experiment Setup. In this part, we follow the setup (learning rate, optimizer, weight decay, momentum) of Goldblum et al. (2020) to implement the adversarial distillation experiments on the CIFAR-10/CIFAR-100 datasets. Specifically, we train ResNet-18 under the different methods using SGD with 0.9 momentum for 200 epochs. The initial learning rate is 0.1, divided by 10 at Epoch 100 and Epoch 150 respectively, and the weight decay is 0.0002. For adversarial training, we set the perturbation bound ε = 8/255, the PGD step size σ = 2/255, and the number of PGD steps K = 10. For distillation, we use τ = 1 and take the models pre-trained by AT and TRADES with the best PGD-10 test accuracy as the teacher models for ARD, AKD2, and our IAD. For ARD, we set its hyper-parameter λ = 0 as recommended in Goldblum et al. (2020) for better robustness. For AKD2, we set λ1 = 0.25, λ2 = 0.5, and λ3 = 0.25 as recommended in Chen et al. (2021). For IAD-I and IAD-II, we set the warming-up period to 60/80 and 40/80 epochs respectively for training on CIFAR-10/CIFAR-100. Regarding the computation of α, we set λ = 0 and β = 0.1. For γ, we currently set γ = 1 − α; more ablation studies on this setting can be found in Appendix A.3.
Results. We report the results in Table 2, where the results of AT and TRADES are listed in the first and fifth rows, and the other methods use these models as teacher models in distillation. On CIFAR-10 and CIFAR-100, our IAD-I and IAD-II obtain consistent improvements in adversarial robustness in terms of PGD-20, CW∞, and AA accuracy compared with the student models distilled by ARD or AKD2 and with the adversarially pre-trained teacher models. Besides, IAD-II generally performs better than IAD-I when the teacher is trained by AT, which suggests that AKD2 is a better starting point than ARD in this case. However, when the teacher model is trained by TRADES, the robustness advantage is reversed, with IAD-I ahead of IAD-II. Considering their distillation philosophies, i.e., ℓKL(S(x̃|τ)||T(x|τ)) for IAD-I and ℓKL(S(x̃|τ)||T(x̃|τ)) for IAD-II, this may depend on which of T(x|τ) and T(x̃|τ) is more informative when distilling from the different teachers. The natural accuracy of IAD-I is sometimes lower than that of the other methods, but the drop is not very significant compared to IAD-II.
Experiment Setup. In this part, we evaluate these methods with a larger-capacity model, i.e., WideResNet-34-10. The teacher network is trained by AT and TRADES, following the settings of Zhang et al. (2021). We keep most baseline settings the same as in the previous experiment. For IAD-I and IAD-II, we set the warming-up period to 5/10 epochs on CIFAR-10/CIFAR-100.
Results. The results are summarized in Table 3. Similarly, on CIFAR-10 and CIFAR-100, our method achieves better model robustness than ARD, AKD2, and the original teacher models. Moreover, our IAD methods do not sacrifice much standard performance compared with the original teacher models. Since AKD2 additionally utilizes a standard pre-trained teacher model, IAD-II achieves consistently better natural performance than IAD-I. However, in terms of robustness, IAD-I is generally comparable to or even better than IAD-II under both AT and TRADES.
5.2 EVALUATION ON Tiny-ImageNet DATASET
Experiment Setup. In this part, we evaluate these methods on the more challenging Tiny-ImageNet dataset. For the adversarially pre-trained models, we follow the settings of Chen et al. (2021) to train AT and TRADES. To be specific, we train PreAct ResNet-18 using SGD with 0.9 momentum for 100 epochs. The initial learning rate is 0.1, divided by 10 at Epoch 50 and Epoch 80 respectively, and the weight decay is 0.0005. For the distillation baselines, we keep most settings the same as in Section 5.1. For ARD and IAD-I, we set λ = 0.9 to deal with the complex task, following Goldblum et al. (2020). For both IAD-I and IAD-II, we use λ = 0.1, β = 0.1, and 10 warming-up epochs.
Results. We report the results in Table 4. Overall, IAD-I and IAD-II still achieve better model robustness than the other methods. Specifically, on Tiny-ImageNet, IAD-II improves both natural accuracy and robust accuracy compared to IAD-I and the other baselines.
5.3 ABLATION STUDIES
To give a comprehensive understanding of our proposed IAD method, we have conducted a series of experiments (in Appendix A), including ablation studies on the γ used for student introspection, on the τ used for adversarial distillation, and on the loss terms of IAD-II, as well as a comparison of computational cost. In the following, we only study the effects of β and of the warming-up period for our student introspection. More ablation studies can be found in the Appendix.
Experiment Setup. To understand the effects of different β values and different warming-up periods on the CIFAR-10 dataset, we conduct ablation studies in this part. Specifically, we choose ResNet-18 as the backbone model and keep the experimental settings the same as in Section 5.1. In the first experiment, we use no warming-up period and study the effect of different β values. In the second experiment, we set β = 0.1 and investigate different warming-up periods.
Results. We report part of the results for the IAD-I method in Figure 4. The complete results with other evaluation metrics, as well as those for the IAD-II method, can be found in Appendix A.1. In Figure 4, we first visualize the values of α under different β in the left panel, which shows the proportion of teacher guidance versus student introspection in adversarial distillation; a larger β corresponds to a larger proportion of student introspection. In the middle panel, we plot the natural and PGD-20 accuracy of the student models distilled with different β. We note that PGD-20 accuracy improves when the student model trusts itself more via a larger β, while natural accuracy decreases as β increases. Similarly, we vary the length of the warming-up period and check the natural and PGD-20 accuracy in the right panel of Figure 4. We find that letting the student model partially trust itself from the very beginning of training leads to inadequate robustness improvements; an appropriate warming-up period in the early stage improves the student model's performance on adversarial examples.
6 CONCLUSION
In this paper, we study distillation from adversarially pre-trained models. We take a closer look at adversarial distillation and discover that the guidance of the teacher model becomes progressively unreliable when robustness is considered. Hence, we explore the construction of reliable guidance in adversarial distillation and propose a method for distillation from unreliable teacher models, i.e., Introspective Adversarial Distillation. Our method encourages the student model to partially instead of fully trust the guidance of the teacher model, and to gradually trust its own introspection more, to improve robustness.
7 ACKNOWLEDGEMENT
JNZ and BH were supported by the RGC Early Career Scheme No. 22200720, NSFC Young Scientists Fund No. 62006202, and HKBU CSD Departmental Incentive Grant. JCY and HXY were supported by NSFC No. U20A20222. JFZ was supported by JST, ACT-X Grant Number JPMJAX21AF. TLL was supported by Australian Research Council Projects DE-190101473 and DP-220102121. JLX was supported by RGC Grant C6030-18GF.
8 ETHICS STATEMENT
This paper does not raise any ethics concerns. This study does not involve any human subjects, practices to data set releases, potentially harmful insights, methodologies and applications, potential conflicts of interest and sponsorship, discrimination/bias/fairness concerns, privacy and security issues, legal compliance, and research integrity issues.
9 REPRODUCIBILITY STATEMENT
To ensure the reproducibility of experimental results, our code is available at https://github.com/ZFancy/IAD.
A COMPREHENSIVE EXPERIMENTS
A.1 COMPLETE RESULTS OF ABLATION STUDIES
In this section, we report the complete results of our ablation studies in Tables 5 (about β) and 6 (about warming-up periods).
In Table 5, we can see that natural and FGSM accuracy decrease, while robust accuracy (PGD-20, CW∞, AA) increases, as β rises. In Table 6, we adjust the length of the warming-up period. We can see that letting the student network partially trust itself at the beginning of the training process results in inadequate robustness improvements. In summary, Table 5 shows a trade-off between natural accuracy and robustness in adversarial distillation, similar to that in standard adversarial training: we can slightly sacrifice robustness (by adjusting β or the warming-up period) to acquire better natural and FGSM accuracy.
A.2 FULLY TRUST OR PARTIALLY TRUST THE TEACHER GUIDANCE
In this section, we compare methods with fully trusted and partially trusted teacher guidance. For the variant of AKD2 that replaces the second term in AKD2 with our "partially trust" KL term (but without the introspection term), we observe a robustness improvement, as shown in Table 7.
In the "partially trust" variant of AKD2, we simply downweight the part of the teacher guidance whose predictions disagree with the hard labels. The results show that fitting this "unreliable" part of the teacher guidance may improve natural performance but consistently decreases robust accuracy.
In the following, we also conduct a range of experiments to compare IAD-I with its variant IAD-I-Down, which applies our downweighting to ℓKL(S(x̃|τ)||T(x|τ)) rather than using a constant 1.0 as in ARD. According to the results, IAD-I consistently achieves better robustness while sacrificing a little natural accuracy. Hence, we choose the constant 1.0 on ℓKL(S(x̃|τ)||T(x|τ)), as in ARD, for the main experiments on the benchmark datasets.
A.3 THE EFFECTS OF γ FOR STUDENT INTROSPECTION
In this section, we check the effect of the student introspection by adjusting γ in our IAD methods. Here, we also use a constant γ in the experiments. We find that a constant γ can further boost model robustness, but natural accuracy then drops a little more than with our previous dynamic design, i.e., γ = 1 − α.
According to the results in Table 9, we can hardly find an optimal coefficient for the student introspection term that achieves both the best natural accuracy and the best robustness. However, there is a clear trend: increasing the coefficient gains more robustness at the cost of natural accuracy. The hyper-parameter γ can thus be flexibly instantiated with constants or strategic schedules to pursue either robustness or natural accuracy.
A.4 THE EFFECTS OF τ FOR ADVERSARIAL DISTILLATION
In this section, we check the effect of τ on adversarial distillation under the same training configurations, and also for TRADES. We list the results of TRADES in Table 10, and the results of IAD-I and ARD in Table 11.
As for TRADES, we think it may not need the temperature τ, which is specifically designed for distillation, since the KL term in TRADES aims to encourage the model's output on adversarial data to be similar to that on natural data. Intuitively, this encourages the output to be more stable, which leads to better robust performance. Based on the test accuracy in Table 10, τ = 1 may be the best choice for TRADES to achieve better robustness, and enlarging (or decreasing) τ can disturb the original output logits and result in lower robustness (e.g., τ = 0.1: PGD-20 43.26%).
According to the comparison in Table 11, with a proper τ (e.g., τ ≥ 1), IAD-I achieves comparable natural accuracy and better robustness. Note that in the IAD-I experiments, we keep τ = 1 for the student introspection term, following the TRADES results in Table 10.
A.5 COMPARISON WITH TRADES AND ARD
In this section, we compare the performance of TRADES, ARD, and our IAD-I with different weights on their respective loss terms. We summarize the results in Table 12.
According to the results, IAD-I achieves better robust accuracy than ARD, and better natural accuracy than TRADES with a smaller drop in PGD-20 accuracy. By varying the hyper-parameters on the loss terms, we also find that natural accuracy can reach 85.94% with robustness dropping to 48.58%, and robustness can reach 55.89% with natural accuracy dropping to 74.95%. In summary, by adjusting these weights, IAD-I can flexibly strike a good balance between natural accuracy and robustness.
A.6 ABLATION STUDY ON THE LOSS TERMS OF IAD-II
In this section, we conduct the ablation study in Table 13 about the four loss terms in IAD-II which is based on AKD2.
Specifically, according to the results, we find that ℓKL(S(x̃|τ)||Tst(x̃|τ)) helps the model gain more natural accuracy, while ℓKL(S(x̃|τ)||Tat(x̃|τ)) helps the model gain more robustness. AKD2 achieves a good balance between the two aspects by combining these terms, and IAD-II further boosts model robustness by incorporating the self-introspection term with a smaller drop in natural performance.
A.7 COMPUTATIONAL COST COMPARISON
In this section, we compare the computational cost of ARD, AKD2, IAD-I, and IAD-II in terms of the average training time per epoch and GPU memory usage. The detailed results are summarized in Table 14.
According to the results, IAD-I and IAD-II consume slightly more time and memory than ARD and AKD2, respectively, due to the additional self-introspection. Besides, an interesting phenomenon is that ARD has fewer terms than AKD2 but consumes more time and GPU memory. This is because ARD has to process both x and x̃, while AKD2 only needs to process x̃.
This paper proposes a new knowledge distillation (KD) method for adversarial training. The authors first observed that the soft-labels provided by the teacher gradually becomes less and less reliable during the adversarial training of student model. Based on this observation, they propose to partially trust the soft labels provided by the adversarily pretrained teacher.
Review
Pros:
The paper is well written and easy to follow. The main claims and key observations are clearly stated.
The key observation is very inspiring: The (adversarially pretrained) teacher model's accuracy on adversarial images generated by the student model gradually drops. This provides solid foundation for the main claim of the paper: The student model shouldn't be fully convinced by the soft labels provided by the teacher model in adversarial distillation. This is something important that previous works overlooked.
AKD2+IAD does generally outperform previous methods, although the margin is not that significant. I suggest the authors try the modification I mentioned in the second bullet under "Cons" if they haven't already. It may enlarge the performance improvement based on my intuitions.
Cons:
As I mentioned before, I'm totally convinced by the intuition to partially trust the teacher model's soft labels in adversarial distillation. Below are some minor concerns.
The observation in Figure 1 (a) and Figure 3 (left column) tells us that the teacher model's predictions on the adversarial images generated by the student model shouldn't be fully trusted. However, the previous method ARD uses the soft labels generated by the teacher network on clean images. So, it is not proper to claim that ARD has an issue because it fully trusted the teacher model's unreliable soft labels. In fact, the soft labels used in ARD are generated by the teacher on clean images and have been shown to be generally reliable by yourself in the blue curve in Figure 1 (a). With that said, AKD2 does use the soft labels on adversarial images. I think it is better to use AKD2 as the starting point to motivate your method, instead of ARD as in Sec 3.1 of the current version. An even more direct way to motivate IAD is to replace the second loss term (i.e., the "fully trust" KL term) in AKD2 with your "partially trust" KL term, while keeping the other terms unchanged.
Most importantly, just like ARD, IAD in Eq. (3) uses soft labels from the teacher on clean images, instead of adversarial images. As I mentioned in bullet 1, the soft labels on clean images are generally trustworthy, as shown by yourself in the paper. So why does IAD partially trust them? It is never motivated, if I understand correctly.
To support your intuitions, the current design of IAD in Eq. (3) might not be the best choice. Specifically, the "Student Introspection" KL loss is weighted by (1 − α). This is not so intuitive for me. You have justified that the "Teacher Guidance" is not always trustworthy, so it is good to weight its loss term with α. However, a trustworthy "Teacher Guidance" doesn't imply an untrustworthy "Student Introspection". So why do you down-weight the "Student Introspection" KL loss when α is large? In fact, the "Student Introspection" KL loss term is just an annealed version of the smoothness loss term in TRADES. As shown in the TRADES paper, it regularizes the smoothness of the model, which helps robustness. In my view, based on your key motivation to partially trust the teacher, the student introspection loss term should have a constant weight instead of being dynamically down-weighted as in Eq. (3).
There are four loss terms in the best performing method AKD2+IAD. It would be nice if ablation studies could be provided on all those loss terms.
Overall, I think the key observations in this paper are very inspiring. If proper modifications can be made to address the above issues, I think it would be a good paper.
ICLR | Title
Reliable Adversarial Distillation with Unreliable Teachers
Abstract
In ordinary distillation, student networks are trained with soft labels (SLs) given by pretrained teacher networks, and students are expected to improve upon teachers since SLs are stronger supervision than the original hard labels. However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students. Therefore, in this paper, we propose reliable introspective adversarial distillation (IAD) where students partially instead of fully trust their teachers. Specifically, IAD distinguishes between three cases given a query of a natural data (ND) and the corresponding adversarial data (AD): (a) if a teacher is good at AD, its SL is fully trusted; (b) if a teacher is good at ND but not AD, its SL is partially trusted and the student also takes its own SL into account; (c) otherwise, the student only relies on its own SL. Experiments demonstrate the effectiveness of IAD for improving upon teachers in terms of adversarial robustness.
1 INTRODUCTION
Deep Neural Networks (DNNs) have shown excellent performance on a range of tasks in computer vision (He et al., 2016) and natural language processing (Devlin et al., 2019). Nevertheless, Szegedy et al. (2014); Goodfellow et al. (2015) demonstrated that DNNs could be easily fooled by adding a small number of perturbations on natural examples, which increases the concerns on the robustness of DNNs in the trustworthy-sensitive areas, e.g., finance (Kumar et al., 2020) and autonomous driving (Litman, 2017). To overcome this problem, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) is proposed and has shown effectiveness to acquire the adversarially robust DNNs.
Most existing adversarial training approaches focus on learning from data directly. For example, the popular adversarial training (AT) (Madry et al., 2018) leverages multi-step projected gradient descent (PGD) to generate the adversarial examples and feed them into the standard training. Zhang et al. (2019) developed TRADES on the basis of AT to balance the standard accuracy and robust performance. Recently, there are several methods under this paradigm are developed to improve the model robustness (Wang et al., 2019; Alayrac et al., 2019; Carmon et al., 2019; Zhang et al., 2020; Jiang et al., 2020; Ding et al., 2020; Du et al., 2021; Zhang et al., 2021). However, directly learning from the adversarial examples is a challenging task on the complex datasets since the loss with hard labels is difficult to be optimized (Liu et al., 2020), which limits us to achieve higher robust accuracy.
To mitigate this issue, one emerging direction is distilling robustness from the adversarially pretrained model intermediately, which has shown promise in the recent study (Zi et al., 2021; Shu et al., 2021). For example, Ilyas et al. (2019) used an adversarially pre-trained model to build a “robustified” dataset to learn a robust DNN. Fan et al. (2021); Salman et al. (2020) explored to boost the model
† Corresponding author (bhanml@comp.hkbu.edu.hk).
robustness through fine-tuning or transfer learning from adversarially pre-trained models. Goldblum et al. (2020)and Chen et al. (2021) investigated distilling the robustness from adversarially pre-trained models, termed as adversarial distillation for simplicity, where they encouraged student models to mimic the outputs (i.e., soft labels) of the adversarially pre-trained teachers.
However, one critical difference is: in the conventional distillation, the teacher model and the student model share the natural training data; while in the adversarial distillation, the adversarial training data of the student model and that of the teacher model are egocentric (respectively generated by themselves) and becoming more adversarial challenging during training. Given this distinction, are the soft labels acquired from the teacher model in adversarial distillation always reliable and informative guidance? To answer this question, we take a closer look at the process of adversarial distillation. As shown in Figure 1(a), we discover that along with the training, the teacher model progressively fails to give a correct prediction for the adversarial data queried by the student model. The reason could be that with the students being more adversarially robust and thus the adversarial data being harder, it is too demanding to require the teachers become always good at every adversarial data queried by the student model, as the teacher model has never seen these data in its pre-training. In contrast, for the conventional distillation, student models are expected to distill the “static” knowledge from the teacher model, since the soft labels for the natural data from the teacher model are always fixed.
The observation in Figure 1(a) raises the challenge: how to conduct reliable adversarial distillation with unreliable teachers? To solve this problem, we can categorize the training data according to the prediction on natural and student generated adversarial data into three cases. First, if the teacher model can correctly classify both natural and adversarial data, it is reliable; Second, if the teacher model can correctly classify the natural but not adversarial data, it should be partially trusted, and the student model is suggested to trust itself to enhance model robustness as the adversarial regularization (Zhang et al., 2019); Third, if the teacher model cannot correctly classify both natural and adversarial data, the student model is recommended to trust itself totally. According to this intuition, we propose an Introspective Adversarial Distillation (IAD) to effectively utilize the knowledge from an adversarially pre-trained teacher model. The framework of our proposed IAD can be seen in Figure 1(b). Briefly, the student model is encouraged to partially instead of fully trust the teacher model, and gradually trust itself more as being more adversarially robust. We conduct extensive experiments on the benchmark CIFAR-10/CIFAR-100 and the more challenging Tiny-ImageNet datasets to evaluate the efficiency of our IAD. The main contributions of our work can be summarized as follows.
1. We take a closer look at adversarial distillation under the teacher-student paradigm. Considering adversarial robustness, we discover that the guidance from the teacher model is progressively unreliable along with the adversarial training.
2. We construct the reliable guidance for adversarial distillation by flexibly utilizing the robust knowledge from the teacher model: (a) if a teacher is good at adversarial data, its soft labels can be fully trusted; (b) if a teacher is good at natural data but not adversarial data, its soft
labels should be partially trusted and the student also takes its own soft labels into account; (c) otherwise, the student only relies on its own soft labels.
3. We propose an Introspective Adversarial Distillation (IAD) to automatically realize the intuition of the previous reliable guidance during the adversarial distillation. The experimental results confirmed that our approach can improve adversarial robustness across a variety of training settings and evaluations, and also on the challenging (consider adversarial robustness) datasets (e.g., CIFAR-100 (Krizhevsky, 2009) and Tiny-ImageNet (Le & Yang, 2015)) or using large models (e.g., WideResNet (Zagoruyko & Komodakis, 2016)).
2 RELATED WORK
2.1 ADVERSARIAL TRAINING.
Adversarial examples (Goodfellow et al., 2015) motivate many defensive approaches developed in the last few years. Among them, adversarial training has been demonstrated as the most effective method to improve the robustness of DNNs (Cai et al., 2018; Wang et al., 2020; Jiang et al., 2020; Chen et al., 2021; Sriramanan et al., 2021). The formulation of the popular AT (Madry et al., 2018) and its variants can be summarized as the minimization of the following loss:
min fθ∈F
1
n n∑ i=1 `(fθ(x̃i), yi), (1)
where n is the number of training examples, x̃i is the adversarial example within the -ball (bounded by an Lp-norm) centered at natural example xi, yi is the associated label, fθ is the DNN with parameter θ and `(·) is the standard classification loss, e.g., the cross-entropy loss. Adversarial training leverages adversarial examples to smooth the small neighborhood, making the model prediction locally invariant. To generate the adversarial examples, AT employs a PGD method (Madry et al., 2018). Concretely, given a sample x(0) ∈ X and the step size β > 0, PGD recursively searches
x̃(t+1) = ΠB[x̃(0)] ( x̃(t) + β sign(∇x̃(t)`(fθ(x̃(t)), y)) ) , (2)
until a certain stopping criterion is satisfied. In Eq. equation 2, t ∈ N, ` is the loss function, x̃(t) is adversarial data at step t, y is the corresponding label for natural data, and ΠB [x̃0](·) is the projection function that projects the adversarial data back into the -ball centered at x̃(0).
2.2 KNOWLEDGE DISTILLATION
The idea of distillation from other models can be dated back to (Craven & Shavlik, 1996), and re-introduced by (Hinton et al., 2015) as knowledge distillation (KD). It has been widely studied in recent years (Yao et al., 2021) and works well in numerous applications like model compression and transfer learning. For adversarial defense, a few studies have explored obtaining adversarial robust models by distillation. Papernot et al. (2016) proposed defensive distillation which utilizes the soft labels produced by a standard pre-trained teacher model, while this method is proved to be not resistant to the C&W attacks (Carlini & Wagner, 2017); Goldblum et al. (2020) combined AT with KD to transfer robustness to student models, and they found that the distilled models can outperform adversarially pre-trained teacher models of identical architecture in terms of adversarial robustness; Chen et al. (2021) utilized distillation as a regularization for adversarial training, which employed robust and standard pre-trained teacher models to address the robust overfitting (Rice et al., 2020).
Nonetheless, all these related methods fully trust teacher models and do not consider that whether the guidance of the teacher model in distillation is reliable or not. In this paper, different from the previous studies, we find that the teacher model in adversarial distillation is not always trustworthy. Formally, adversarial distillation suggests to minimize Ex̃∈B[x] [`kl(S(x̃|τ)||T (x̃|τ))], where T (x̃|τ) is not a constant soft supervision along with the adversarial training and affected by the adversarial data generated by the dynamically evolved student network. Based on that, we propose reliable IAD to encourage student models to partially instead of fully trust teacher models, which effectively utilizes the knowledge from the adversarially pre-trained models.
3 A CLOSER LOOK AT ADVERSARIAL DISTILLATION
In Section 3.1, we discuss the unreliable issue of adversarial distillation, i.e., the guidance of the teacher model is progressively unreliable along with adversarial training. In Section 3.2, we partition the training examples into three parts and analyze them part by part. Specifically, we expect that the student model should partially instead of fully trust the teacher model and gradually trust itself more along with adversarial training.
3.1 FULLY TRUST: PROGRESSIVELY UNRELIABLE GUIDANCE
As aforementioned in the Introduction, previous methods (Goldblum et al., 2020; Chen et al., 2021) fully trust the teacher model when distilling robustness from adversarially pre-trained models. Taking Adversarial Robust Distillation (ARD) (Goldblum et al., 2020) as an example, we illustrate its procedure in the left part of Figure 1(b): the student model generates its adversarial data and then optimizes the prediction of them to mimic the output of the teacher model. However, although the teacher model is well optimized on the adversarial data queried by itself, we argue that it might not always be good at the more and more challenging adversarial data queried by the student model.
As shown in Figure 1(a), different from the ordinary distillation in which the teacher model has the consistent standard performance on the natural data, its robust accuracy on the student model’s adversarial data is decreasing during distillation. The guidance of the teacher model gradually fails to give the correct output on the adversarial data queried by the student model.
3.2 PARTIALLY TRUST: CONSTRUCTION OF RELIABLE GUIDANCE
The unreliable issue of the teacher model in adversarial distillation raises the challenge of how to conduct reliable adversarial distillation with unreliable teachers? Intuitively, this requires us to re-consider the guidance of adversarially pre-trained models along with the adversarial training. For simplicity, we use T (x) (T (x̃)) to represent the predicted label of the teacher model on the natural (adversarial) examples, and use y to represent the targeted label. We partition the adversarial samples into following parts as shown in the toy illustration (Figure 2(a)), and analyze them part by part.
1) T (x) = y ∩ T (x̃) = y: It can be seen in Figure 2(a) that this part of data whose adversarial variants like x′1 is the most trustworthy among the three parts, since the teacher model performs well on both natural and adversarial data. In this case, we could choose to trust the guidance of the teacher model on this part of the data. However, as shown in Figure 2(b), we find that the sample number of this part is decreasing along with the adversarial training. That is, what we can rely on from the teacher model in adversarial distillation is progressively reduced.
2) T (x) = y ∩ T (x̃) 6= y: In Figure 2(b), we also check the number change of the part of data whose adversarial variants like x′′1 . Corresponding to the previous category, the number of this kind of data is increasing during distillation. Since the teacher model’s outputs on the small neighborhood of the queried natural data are not always correct, its knowledge may not be robust and the guidance for the student model is not reliable. Think back to the reason for the decrease in the robust accuracy of the teacher model, the student model itself may also be trustworthy since it becomes gradually adversarial robust during distillation.
3) T (x) 6= y∩T (x̃) 6= y: As for the data which are like x2 in Figure 2(a), the guidance of the teacher model is totally unreliable since the predicted labels on the natural data are wrong. The student model may also trust itself to encourage the outputs to mimic that of their natural data rather than the wrong outputs from the teacher model. First, it removes the potential threat that the teacher’s guidance may be a kind of noisy labels for training. Second, as an adversarial regularization (Zhang et al., 2019), it can improve the model robustness through enhancing the stability of the model’s outputs on the natural and the corresponding adversarial data.
4) T (x) 6= y ∩ T (x̃) = y: Considering the generation process of the adversarial data, i.e., x̃∗ = arg maxx̃∈B (x) `(f(x̃), y), Once the original prediction is wrong, i.e.,, T (x) 6= y, the generation of x̃∗ only make the prediction worse. Thus, this group doesn’t exist.
To sum up, we suggest employing reliable guidance from the teacher model and encouraging the student model to trust itself more as the teacher model’s guidance being progressively unreliable and the student model gradually becoming more adversarially robust.
4 INTROSPECTIVE ADVERSARIAL DISTILLATION
Based on previous analysis about the adversarial distillation, we propose the Introspective Adversarial Distillation (IAD) to better utilize the guidance from the adversarially pre-trained model. Concretely, we have the following KD-style loss, but composite with teacher guidance and student introspection.
`IAD = O(ADi;α)︸ ︷︷ ︸ Label & Teacher Guidance +γ `KL(S(x̃|τ)||S(x|τ))︸ ︷︷ ︸ Student Introspection ), (3)
where O(ADi;α) is the previous adversarial distillation baseline, e.g., ARD (Goldblum et al., 2020) or AKD2 (Chen et al., 2021), weighted by the hyper-parameter α1, γ is a weight for the student introspection, S(·|τ) is a Softmax operator with the temperature τ ∈ (0,+∞), e.g., S(xk|τ) = exp(xk/τ)∑ k′ exp(xk′/τ)
, S(·) is the conventional Softmax with the temperature τ = 1, T (·|τ) is the tempered variant of the teacher output T (·), x̃ is the adversarial data generated from the natural data x, y is the hard label and `CE is the Cross Entropy loss and `KL is the KL-divergence loss. As for the annealing parameter α ∈ [0, 1] that is used to balance the effect of the teacher model in adversarial distillation, based on the analysis about the reliability of adversarial supervision in Section 3, we define it as,
α = (PT (x̃|y))β , (4)
where PT (·|y) is the prediction probability of the teacher model about the targeted label y and β is a hyperparameter to sharpen the prediction. The intuition behind IAD is to calibrate the guidance from the teacher model automatically based on the prediction of adversarial data. Our α naturally corresponds to the construction in Section 3.2, since the prediction probability of the teacher model for the adversarial data can well represent the categorical information. As for β, we have plot the specific values of α under its adjustment in the left of Figure 4.
Intuitively, the student model can trust the teacher model when α approaches 1, which means that the teacher model is good at both natural and adversarial data. However, when α approaches 0, it corresponds that the teacher model is good at natural but not adversarial data, or even not good at both, and thus the student model should take its self-introspection into account. In Figure 3, we check the reliability of the student model itself. According to the left panel of Figure 3, we can see that the student model is progressively robust to the adversarial data. And if we incorporate the student introspection into the adversarial distillation, the results in the middle of Figure 3 confirms its
1Note that, we do not use α when ADi is ARD. Please refer to the Appendix A.2 for the ablation study.
Algorithm 1 Introspective Adversarial Distillation (IAD) Input: student model S, teacher model T , training dataset D = {(xi, yi)}ni=1, learning rate η, number of epochs N , batch size m, number of batches M , temperature parameter τ , the annealing parameter on teacher model’s predicted probability α, adjustable parameter λ, λ1, λ2, λ3 and γ. Output: adversarially robust model Sr for epoch = 1, . . . , N do
for mini-batch = 1, . . . , M do Sample a mini-batch {(xi, yi)}mi=1 from D for i = 1, . . . , m (in parallel) do
Obtain adversarial data x̃i of xi by PGD based on Eq. equation 2. Compute α for each adversarial data based on Eq. equation 4.
end for IAD-I: θ ← θ − η∇θ {
λ`CE(S(x), y) + (1− λ) · τ2· (`KL(S(x̃|τ)||T (x|τ)) + γ`KL(S(x̃|τ)||S(x|τ))) } or
IAD-II: θ ← θ−η∇θ λ1`CE(S(x̃), y) + λ2 · τ2· (α · `KL(S(x̃|τ)||Tat(x̃|τ))+λ3τ2`KL(S(x̃|τ)||Tst(x̃|τ))+
γ`KL(S(x̃|τ)||S(x|τ))) end for
end for
potential benefits to improve the accuracy of the guidance. Moreover, as shown in the right panel of Figure 3, adding self-introspection results in better improvement in model robustness compared to only using the guidance of the teacher model. Therefore, `IAD automatically encourages the outputs of the student model to mimic more reliable guidance in adversarial distillation.
Algorithm 1 summarizes the implementation of Introspective Adversarial Distillation (IAD). Specifically, IAD first leverages PGD to generate the adversarial data for the student model. Secondly, IAD computes the outputs of the teacher model and the student model on the natural data. Then, IAD mimics the outputs of the student model with that of itself and the teacher model partially based on the probability of the teacher model on the adversarial data.
Warming-up period. During training, we add a warming-up period to activate the student model, where α (in Eq. equation 3) is hardcoded to 1. This is because the student itself is not trustworthy in the early stage (refer to the left panel of Figure 3). Through that, we expect the student model to first evolve into a relatively reliable learner and then conducts the procedure of IAD.
4.1 COMPARISON WITH RELATED METHODS
In this section, we discuss the difference between IAD and other related approaches in the perspective of the loss functions. Table 1 summarizes all of them.
As shown in Table 1, AT (Madry et al., 2018) utilizes the hard labels to supervise adversarial training; TRADES (Zhang et al., 2019) decomposes the loss function of AT into two terms, one for standard training and the other one for adversarial training with the soft supervision; Motivated by KD (Hinton et al., 2015), Goldblum et al. (2020) proposed ARD to conduct the adversarial distillation, which fully
trusts the outputs of the teacher model to learn the student model. As indicated by the experiments in Goldblum et al. (2020), a larger λ resulted in less robust student models. Thus, they generally set λ = 0 in their experiments; Chen et al. (2021) utilized distillation as a regularization to avoid the robust overfitting issue, which employed both the adversarially pre-trained teacher model and the standard pre-trained model. Thus, there are two KL-divergence loss and for simplicity, we term their method as AKD2; Regarding IAD, there are two types of implementations that are respectively based on ARD or AKD2. We term them as IAD-I and IAD-II, and their difference with previous ARD and AKD2 is an additional self-introspection term. Besides, we also apply α to downweight the dependency on the term `KL(S(x̃|τ)||Tat(x̃|τ)), which has been explained in previous sections.
5 EXPERIMENTS
We conduct extensive experiments to evaluate the effectiveness of IAD. In Section 5.1, we compare IAD with benchmark adversarial training methods (AT and TRADES) and some related methods which utilize adversarially pre-trained models via KD (ARD and AKD2) on CIFAR-10/CIFAR100 (Krizhevsky, 2009) datasets. In Section 5.2, we compare the previous methods with IAD on a more challenging dataset Tiny-ImageNet (Le & Yang, 2015). In Section 5.3, the ablation studies are conducted to analyze the effects of the hyper-parameter β and different warming-up periods for IAD.
Regarding the measures, we compute the natural accuracy on the natural test data and the robust accuracy on the adversarial test data generated by FGSM, PGD-20, and C&W attacks following (Wang et al., 2019) Moreover, we estimate the performance under AutoAttack (termed as AA).
5.1 EVALUATION ON CIFAR-10/CIFAR-100 DATASETS
Experiment Setup. In this part, we follow the setup (learning rate, optimizer, weight decay, momentum) of Goldblum et al. (2020) to implement the adversarial distillation experiments on the CIFAR-10/CIFAR-100 datasets. Specifically, we train ResNet-18 under different methods using SGD with 0.9 momentum for 200 epochs. The initial learning rate is 0.1 divided by 10 at Epoch 100 and Epoch 150 respectively, and the weight decay=0.0002. In the settings of adversarial training, we set the perturbation bound = 8/255, the PGD step size σ = 2/255, and PGD step numbers K = 10. In the settings of distillation, we use τ = 1 and use models pre-trained by AT and TRADES which have the best PGD-10 test accuracy as the teacher models for ARD, AKD2 and our IAD. For ARD, we set its hyper-parameter λ = 0 as recommend in Goldblum et al. (2020) for gaining better robustness. For AKD2, we set λ1 = 0.25, λ2 = 0.5 and λ3 = 0.25 as recommanded in Chen et al. (2021). For IAD-I and IAD-II, we respectively set the warming-up period as 60/80 and 40/80 epochs to train on
CIFAR-10/CIFAR-100. Regarding the computation of α, we set λ = 0, β = 0.1. For γ, we currently set γ = 1− α and more ablation study about its setting can be found in the Appendix A.3.
Results. We report the results in Table 2, where the results of AT and TRADES are listed in the first and fifth rows, and the other methods use these models as the teacher models in distillation. On CIFAR-10 and CIFAR-100, we note that IAD-I and IAD-II obtain consistent improvements in adversarial robustness in terms of PGD-20, CW∞, and AA accuracy compared with the student models distilled by ARD or AKD2 and with the adversarially pre-trained teacher models. Besides, IAD-II generally performs better than IAD-I when the teacher is trained by AT, which means AKD2 in this case could be a better starting point than ARD. However, when the teacher model is trained by TRADES, the robustness advantage is reversed, with IAD-I ahead of IAD-II. Considering their distillation philosophies, i.e., ℓKL(S(x̃|τ)||T(x|τ)) for IAD-I and ℓKL(S(x̃|τ)||T(x̃|τ)) for IAD-II, it may depend on which of T(x|τ) and T(x̃|τ) is more informative in adversarial distillation from the respective teachers. The natural accuracy of IAD-I is sometimes lower than that of the other methods, but the performance drop is not very significant compared to IAD-II.
Experiment Setup. In this part, we evaluate these methods using a model with a larger capacity, i.e., WideResNet-34-10. The teacher networks are trained by AT and TRADES, following the settings of Zhang et al. (2021). We keep most settings of the baselines the same as in the previous experiment. For IAD-I and IAD-II, we set the warming-up period to 5/10 epochs on CIFAR-10/CIFAR-100.
Results. The results are summarized in Table 3. Similarly, on CIFAR-10 and CIFAR-100, our methods achieve better model robustness than ARD, AKD2, and the original teacher models. Moreover, our IAD methods do not sacrifice much standard performance compared with the original teacher models. Since AKD2 additionally utilizes a standard pre-trained teacher model, IAD-II achieves consistently better natural performance than IAD-I. However, in terms of robustness, IAD-I is generally comparable to or even better than IAD-II under both AT and TRADES.
5.2 EVALUATION ON Tiny-ImageNet DATASET
Experiment Setup. In this part, we evaluate these methods on the more challenging Tiny-ImageNet dataset. For the adversarially pre-trained models, we follow the settings of Chen et al. (2021) to train AT and TRADES. To be specific, we train PreActive-ResNet-18 using SGD with 0.9 momentum for 100 epochs. The initial learning rate is 0.1, divided by 10 at Epoch 50 and Epoch 80 respectively, and the weight decay is 0.0005. For the distillation baselines, we keep most settings the same as in Section 5.1. For ARD and IAD-I, here we set λ = 0.9 to deal with the complex task, following Goldblum et al. (2020). For both IAD-I and IAD-II, we use λ = 0.1, β = 0.1, and 10 warming-up epochs.
Results. We report the results in Table 4. Overall, IAD-I and IAD-II still achieve better model robustness than the other methods. Specifically, on Tiny-ImageNet, IAD-II improves both the natural accuracy and the robust accuracy compared to IAD-I and the other baselines.
5.3 ABLATION STUDIES
To give a comprehensive understanding of our proposed IAD method, we have conducted a series of experiments (in Appendix A), including ablation studies on different γ for the student introspection, different τ for adversarial distillation, and the loss terms of IAD-II, as well as a comparison of the computational cost. In the following, we only study the effect of β and of the warming-up period for the student introspection; more ablation studies can be found in the Appendix.
Experiment Setup. To understand the effects of different β values and different warming-up periods on the CIFAR-10 dataset, we conduct ablation studies in this part. Specifically, we choose ResNet-18 as the backbone model and keep the experimental settings the same as in Section 5.1. In the first experiment, we use no warming-up period and study the effect of different β. In the second experiment, we set β = 0.1 and investigate different warming-up periods.
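The warming-up logic being ablated can be expressed as a small helper: during the warm-up epochs α is hardcoded to 1 so the student fully trusts the teacher, and afterwards α follows Eq. (4). This is a sketch with assumed names, not the authors' code.

```python
import torch

def introspection_alpha(t_probs_adv, y, epoch, warmup_epochs, beta=0.1):
    """alpha = 1 during warm-up (full trust in the teacher); Eq. (4) afterwards."""
    if epoch < warmup_epochs:
        return torch.ones_like(y, dtype=torch.float)
    # alpha = (P_T(x_adv | y))^beta per sample.
    return t_probs_adv.gather(1, y.unsqueeze(1)).squeeze(1).pow(beta)
```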
Results. We report part of the results for the IAD-I method in Figure 4. The complete results with other evaluation metrics, as well as those for the IAD-II method, can be found in Appendix A.1. In Figure 4, we first visualize the values of α under different β in the left panel, which shows the proportion of teacher guidance versus student introspection in adversarial distillation: a larger β corresponds to a larger proportion of student introspection. In the middle panel, we plot the natural and PGD-20 accuracy of the student models distilled with different β. We note that the PGD-20 accuracy improves when the student model trusts itself more via a larger β value. However, the natural accuracy decreases as β increases. Similarly, we adjust the length of the warming-up period and check the natural and PGD-20 accuracy in the right panel of Figure 4. We find that letting the student model partially trust itself from the very beginning of training leads to inadequate robustness improvements; an appropriate warming-up period at the early stage improves the student model's performance on adversarial examples.
6 CONCLUSION
In this paper, we study distillation from adversarially pre-trained models. We take a closer look at adversarial distillation and discover that the guidance of the teacher model becomes progressively unreliable when robustness is considered. Hence, we explore the construction of reliable guidance in adversarial distillation and propose a method for distillation from unreliable teacher models, i.e., Introspective Adversarial Distillation. Our method encourages the student model to partially instead of fully trust the guidance of the teacher model and to gradually trust its own introspection more to improve robustness.
7 ACKNOWLEDGEMENT
JNZ and BH were supported by the RGC Early Career Scheme No. 22200720, NSFC Young Scientists Fund No. 62006202, and HKBU CSD Departmental Incentive Grant. JCY and HXY were supported by NSFC No. U20A20222. JFZ was supported by JST, ACT-X Grant Number JPMJAX21AF. TLL was supported by Australian Research Council Projects DE-190101473 and DP-220102121. JLX was supported by RGC Grant C6030-18GF.
8 ETHICS STATEMENT
This paper does not raise any ethics concerns. This study does not involve human subjects, dataset releases, potentially harmful insights, methodologies or applications, potential conflicts of interest or sponsorship, discrimination/bias/fairness concerns, privacy or security issues, legal compliance issues, or research integrity issues.
9 REPRODUCIBILITY STATEMENT
To ensure the reproducibility of experimental results, our code is available at https://github.com/ZFancy/IAD.
A COMPREHENSIVE EXPERIMENTS
A.1 COMPLETE RESULTS OF ABLATION STUDIES
In this section, we report the complete results of our ablation studies in Tables 5 (about β) and 6 (about warming-up periods).
In Table 5, we can see that the natural and FGSM accuracy decrease, while the robust accuracy (PGD-20, CW∞, AA) increases, as β rises. In Table 6, we adjust the length of the warming-up period. We can see that letting the student network partially trust itself at the beginning of the training process results in inadequate robustness improvements. In summary, Table 5 shows a trade-off between natural accuracy and robustness in adversarial distillation, similar to that in standard adversarial training. We can slightly sacrifice robustness (by adjusting β or the warming-up period) to acquire better natural and FGSM accuracy.
A.2 FULLY TRUST OR PARTIALLY TRUST THE TEACHER GUIDANCE
In this section, we compare methods with fully trusted and partially trusted teacher guidance. For the variant of AKD2 that replaces the second term of AKD2 with our "partially trust" KL term (but without the introspection term), we observe a robustness improvement, as shown in Table 7.
In the "partially trust" variant of AKD2, we simply downweight the part of the teacher guidance whose predictions disagree with the hard labels. The results show that fitting this "unreliable" part of the teacher guidance may improve the natural performance but consistently drops the robust accuracy.
In the following, we also conduct a range of experiments to compare IAD-I and its variant IAD-I-Down, which applies our downweighting to ℓKL(S(x̃|τ)||T(x|τ)) rather than using a constant weight of 1.0 as in ARD. According to the results, IAD-I consistently achieves better robustness while sacrificing a little natural accuracy. Hence, we choose the constant weight of 1.0 on ℓKL(S(x̃|τ)||T(x|τ)), as in ARD, for the main experiments on the benchmark datasets.
A.3 THE EFFECTS OF γ FOR STUDENT INTROSPECTION
In this section, we check the effects of the student introspection by adjusting γ in our IAD methods. Here, we also use a constant γ in the experiments. We find that using a constant γ can further boost the model robustness, but the natural accuracy is sacrificed a little more than with our previous dynamic design, i.e., γ = 1 − α.
According to the results in Table 9, we can hardly find an optimal coefficient for the student introspection term that achieves both the best natural accuracy and the best robustness. However, there is a trend that increasing the coefficient gains more robustness at the cost of natural accuracy. The hyper-parameter γ may therefore be flexibly instantiated with constants or strategic schedules to pursue either robustness or natural accuracy.
A.4 THE EFFECTS OF τ FOR ADVERSARIAL DISTILLATION
In this section, we check the effects of τ on adversarial distillation under the same training configurations, and we also examine τ for TRADES. We list the results of TRADES in Table 10 and the results of IAD-I and ARD in Table 11.
As for TRADES, we think it may not need the temperature τ, which is specifically designed for distillation, since the KL term in TRADES aims to encourage the model's output on adversarial data to be similar to that on natural data. Intuitively, this encourages the output to be more stable, which leads to better robust performance. Based on the test accuracy in Table 10, τ = 1 may be the best choice for TRADES to achieve better robustness, while enlarging (or decreasing) τ can disturb the original output logits and may result in lower robustness (e.g., τ = 0.1: PGD-20 43.26%).
According to the comparison in Table 11, with a proper τ (e.g., τ ≥ 1), IAD-I achieves comparable natural accuracy and better robustness. Note that in the IAD-I experiments, we keep τ = 1 for the student introspection term, following the results of TRADES in Table 10.
A.5 COMPARISON WITH TRADES AND ARD
In this section, we compare the performance of TRADES, ARD, and our IAD-I with different weights on their loss terms. We summarize the results in Table 12.
According to the results, IAD-I achieves better robust accuracy than ARD, and better natural accuracy than TRADES with a smaller drop in PGD-20 accuracy. By varying the hyper-parameters on the loss terms, we also find that the natural accuracy can reach 85.94% with robustness dropping to 48.58%, and the robustness can reach 55.89% with natural accuracy dropping to 74.95%. In summary, by adjusting these weights, IAD-I can flexibly strike a good balance between natural accuracy and robustness.
A.6 ABLATION STUDY ON THE LOSS TERMS OF IAD-II
In this section, we conduct an ablation study (Table 13) on the four loss terms of IAD-II, which is based on AKD2.
Specifically, according to the results, we find that ℓKL(S(x̃|τ)||Tst(x̃|τ)) helps the model gain more natural accuracy, while ℓKL(S(x̃|τ)||Tat(x̃|τ)) helps the model gain more robustness. AKD2 achieves a good balance between the two aspects by combining the above two terms, and IAD-II further boosts the model robustness by incorporating the self-introspection term with less drop in natural performance.
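Putting the four terms together, one plausible PyTorch reading of the IAD-II objective is sketched below. The exact grouping of the coefficients λ1, λ2, λ3, and γ is our assumption and may differ from the authors' implementation; the defaults follow the AKD2 setting in Section 5.1.

```python
import torch
import torch.nn.functional as F

def iad2_loss(s_logits_nat, s_logits_adv, t_at_logits_adv, t_st_logits_adv, y,
              lam1=0.25, lam2=0.5, lam3=0.25, beta=0.1, tau=1.0):
    """Sketch of the four-term IAD-II objective (AKD2 + self-introspection).

    t_at_* / t_st_*: logits of the adversarially / standardly pre-trained
    teachers on the student's adversarial data. An assumed reading of the
    paper's loss, not the official code.
    """
    s_log_adv = F.log_softmax(s_logits_adv / tau, dim=1)

    def kl(target_logits):
        # Per-sample KL(S(x_adv | tau) || target(tau)).
        return F.kl_div(s_log_adv, F.softmax(target_logits / tau, dim=1),
                        reduction='none').sum(1)

    # alpha downweights the adversarial teacher; gamma weights introspection.
    t_at_probs = F.softmax(t_at_logits_adv, dim=1)
    alpha = t_at_probs.gather(1, y.unsqueeze(1)).squeeze(1).pow(beta)
    gamma = 1.0 - alpha

    ce_adv = F.cross_entropy(s_logits_adv, y)
    return (lam1 * ce_adv
            + (tau ** 2) * (lam2 * alpha * kl(t_at_logits_adv)
                            + lam3 * kl(t_st_logits_adv)
                            + gamma * kl(s_logits_nat.detach())).mean())
```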
A.7 COMPUTATIONAL COST COMPARISON
In this section, we check and compare the computational cost of ARD, AKD2, IAD-I, and IAD-II in terms of the average training time per epoch as well as the GPU memory usage. The detailed results are summarized in Table 14.
According to the results, IAD-I and IAD-II respectively consume a bit more time and memory than ARD and AKD2 due to the additional self-introspection. Besides, an interesting phenomenon is that ARD has fewer loss terms than AKD2 but consumes more time and GPU memory. This is because ARD has to deal with both x and x̃, while AKD2 only needs to deal with x̃.

1. What is the focus and contribution of the paper on adversarial training?
2. What are the strengths of the proposed approach, particularly in its simplicity and practicality?
3. What are the weaknesses of the paper, especially regarding the experimental results and the tradeoff between natural performance and robustness?
4. Do you have any concerns about the applicability of the method to language datasets?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
In this paper, a new method called introspective adversarial distillation (IAD) is proposed for conventional adversarial training. Concretely, it targets the unreliable-teacher case, where the teacher is good at adversarial or natural data, or at neither. Experimental results validate the effectiveness of the proposed method in enhancing adversarial robustness.
Review
Pros.
The overall idea is interesting, and the proposed method seems simple and practical as well, since we only need to adjust the annealing parameter adaptively. Besides, the paper is quite readable and the experimental results seem promising.
Cons.
There are some issues to be addressed.
The results on natural and FGSM accuracy are not good. The trade-off between natural performance and robustness should be investigated more.
On the Tiny-ImageNet and CIFAR datasets, why is IAD + AKD^2 better than IAD under AA? This phenomenon should be explained more.
The current evaluation is based on image datasets, like CIFAR and Tiny-ImageNet. However, does IAD still work for language datasets?
It would be good to discuss some theoretical justification for IAD, which would make IAD stronger in theory.
1. What is the focus and contribution of the paper regarding adversarial distillation?
2. What are the strengths of the proposed approach, particularly in improving model robustness?
3. Do you have any concerns about the paper, such as the analysis of adversarial data or the comparison with other methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper studies a specific distillation scenario, adversarial distillation, to improve model robustness. Different from the constant soft supervision in ordinary distillation, the teacher model becomes progressively unreliable along with training, since the adversarial data are dynamically searched by the student model and might not be well identified by the teacher model. Therefore, the authors introduce introspective adversarial distillation, which bootstraps learning from both the teacher model and the student itself. A range of experiments demonstrate its superiority in improving model robustness.
Review
In summary, the discovery of the specific degeneration in adversarial distillation is interesting, and the authors design an elaborate loss that takes this drawback of adversarial distillation into account. A range of experiments on different network architectures, adversarial training backbones (AT or TRADES), and three widely used datasets demonstrate its superiority in improving model robustness. Besides, ablation studies regarding the annealing parameter and the warming-up period have been conducted to shed light on its working mechanism.
However, there are still some issues to be addressed.

Major Concerns
In Section 3.2, the authors analyze the adversarial data by partitioning them into three groups. Is there a fourth group where the teacher model has wrong predictions on natural data but a right prediction on its adversarial counterpart? If it almost does not exist, it would be more complete for the authors to at least mention this in the paragraph. Besides, the third group is not entirely consistent with the case of x_2, which also requires a wrong prediction from the student model.
In Section 4, Eq. (4) can approximately reflect the confidence on the adversarial data. But have the authors considered the difference between group 2 and group 3 in Eq. (4), since they may have different scales and correspond to different cases?
In fact, the proposed Introspective Adversarial Distillation can degenerate to TRADES and ARD by setting some extreme hyperparameters. It would be better to conduct an experiment plotting accuracy surfaces (natural and robust accuracy) under varying hyperparameters that include these three methods, which would help us better understand the positioning of the proposed method.
Minor Concerns
Table 1 is messy. The authors could list each distance term as a column name and indicate for each baseline whether it is included or not.
The experiments on IAD + AKD^2 are not well explained. I guess they are meant to show how IAD can help improve the natural accuracy, but the authors have not clarified this.
ICLR | Title
Reliable Adversarial Distillation with Unreliable Teachers
Abstract
In ordinary distillation, student networks are trained with soft labels (SLs) given by pretrained teacher networks, and students are expected to improve upon teachers since SLs are stronger supervision than the original hard labels. However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students. Therefore, in this paper, we propose reliable introspective adversarial distillation (IAD) where students partially instead of fully trust their teachers. Specifically, IAD distinguishes between three cases given a query of a natural data (ND) and the corresponding adversarial data (AD): (a) if a teacher is good at AD, its SL is fully trusted; (b) if a teacher is good at ND but not AD, its SL is partially trusted and the student also takes its own SL into account; (c) otherwise, the student only relies on its own SL. Experiments demonstrate the effectiveness of IAD for improving upon teachers in terms of adversarial robustness.
1 INTRODUCTION
Deep Neural Networks (DNNs) have shown excellent performance on a range of tasks in computer vision (He et al., 2016) and natural language processing (Devlin et al., 2019). Nevertheless, Szegedy et al. (2014); Goodfellow et al. (2015) demonstrated that DNNs could be easily fooled by adding a small number of perturbations on natural examples, which increases the concerns on the robustness of DNNs in the trustworthy-sensitive areas, e.g., finance (Kumar et al., 2020) and autonomous driving (Litman, 2017). To overcome this problem, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) is proposed and has shown effectiveness to acquire the adversarially robust DNNs.
Most existing adversarial training approaches focus on learning from data directly. For example, the popular adversarial training (AT) (Madry et al., 2018) leverages multi-step projected gradient descent (PGD) to generate the adversarial examples and feed them into the standard training. Zhang et al. (2019) developed TRADES on the basis of AT to balance the standard accuracy and robust performance. Recently, there are several methods under this paradigm are developed to improve the model robustness (Wang et al., 2019; Alayrac et al., 2019; Carmon et al., 2019; Zhang et al., 2020; Jiang et al., 2020; Ding et al., 2020; Du et al., 2021; Zhang et al., 2021). However, directly learning from the adversarial examples is a challenging task on the complex datasets since the loss with hard labels is difficult to be optimized (Liu et al., 2020), which limits us to achieve higher robust accuracy.
To mitigate this issue, one emerging direction is to distill robustness from adversarially pre-trained models as an intermediate step, which has shown promise in recent studies (Zi et al., 2021; Shu et al., 2021). For example, Ilyas et al. (2019) used an adversarially pre-trained model to build a “robustified” dataset to learn a robust DNN. Fan et al. (2021); Salman et al. (2020) explored boosting model
† Corresponding author (bhanml@comp.hkbu.edu.hk).
robustness through fine-tuning or transfer learning from adversarially pre-trained models. Goldblum et al. (2020) and Chen et al. (2021) investigated distilling robustness from adversarially pre-trained models, termed adversarial distillation for simplicity, where they encouraged student models to mimic the outputs (i.e., soft labels) of the adversarially pre-trained teachers.
However, one critical difference is: in conventional distillation, the teacher model and the student model share the natural training data, while in adversarial distillation, the adversarial training data of the student model and that of the teacher model are egocentric (respectively generated by themselves) and become more adversarially challenging during training. Given this distinction, are the soft labels acquired from the teacher model in adversarial distillation always reliable and informative guidance? To answer this question, we take a closer look at the process of adversarial distillation. As shown in Figure 1(a), we discover that along with training, the teacher model progressively fails to give correct predictions for the adversarial data queried by the student model. The reason could be that as the student becomes more adversarially robust and its adversarial data thus become harder, it is too demanding to require the teacher to always be good at every adversarial data queried by the student model, since the teacher model has never seen these data in its pre-training. In contrast, in conventional distillation, student models are expected to distill the “static” knowledge from the teacher model, since the teacher's soft labels for the natural data are always fixed.
The observation in Figure 1(a) raises the challenge: how to conduct reliable adversarial distillation with unreliable teachers? To solve this problem, we categorize the training data into three cases according to the teacher's predictions on natural and student-generated adversarial data. First, if the teacher model can correctly classify both natural and adversarial data, it is reliable; second, if the teacher model can correctly classify the natural but not the adversarial data, it should be partially trusted, and the student model is encouraged to also trust itself to enhance robustness via adversarial regularization (Zhang et al., 2019); third, if the teacher model can correctly classify neither the natural nor the adversarial data, the student model should trust itself entirely. Following this intuition, we propose Introspective Adversarial Distillation (IAD) to effectively utilize the knowledge from an adversarially pre-trained teacher model. The framework of IAD is shown in Figure 1(b). Briefly, the student model is encouraged to partially instead of fully trust the teacher model, and to gradually trust itself more as it becomes more adversarially robust. We conduct extensive experiments on the benchmark CIFAR-10/CIFAR-100 and the more challenging Tiny-ImageNet datasets to evaluate the effectiveness of IAD. The main contributions of our work can be summarized as follows.
1. We take a closer look at adversarial distillation under the teacher-student paradigm. Considering adversarial robustness, we discover that the guidance from the teacher model becomes progressively unreliable along with adversarial training.
2. We construct the reliable guidance for adversarial distillation by flexibly utilizing the robust knowledge from the teacher model: (a) if a teacher is good at adversarial data, its soft labels can be fully trusted; (b) if a teacher is good at natural data but not adversarial data, its soft
labels should be partially trusted and the student also takes its own soft labels into account; (c) otherwise, the student only relies on its own soft labels.
3. We propose an Introspective Adversarial Distillation (IAD) to automatically realize the intuition of the previous reliable guidance during the adversarial distillation. The experimental results confirmed that our approach can improve adversarial robustness across a variety of training settings and evaluations, and also on the challenging (consider adversarial robustness) datasets (e.g., CIFAR-100 (Krizhevsky, 2009) and Tiny-ImageNet (Le & Yang, 2015)) or using large models (e.g., WideResNet (Zagoruyko & Komodakis, 2016)).
2 RELATED WORK
2.1 ADVERSARIAL TRAINING.
Adversarial examples (Goodfellow et al., 2015) motivate many defensive approaches developed in the last few years. Among them, adversarial training has been demonstrated as the most effective method to improve the robustness of DNNs (Cai et al., 2018; Wang et al., 2020; Jiang et al., 2020; Chen et al., 2021; Sriramanan et al., 2021). The formulation of the popular AT (Madry et al., 2018) and its variants can be summarized as the minimization of the following loss:
$$\min_{f_\theta \in \mathcal{F}} \; \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_\theta(\tilde{x}_i),\, y_i\big), \qquad (1)$$
where n is the number of training examples, x̃_i is the adversarial example within the ε-ball (bounded by an L_p-norm) centered at the natural example x_i, y_i is the associated label, f_θ is the DNN with parameters θ, and ℓ(·) is the standard classification loss, e.g., the cross-entropy loss. Adversarial training leverages adversarial examples to smooth the small neighborhood, making the model prediction locally invariant. To generate the adversarial examples, AT employs the PGD method (Madry et al., 2018). Concretely, given a sample x(0) ∈ X and a step size β > 0, PGD recursively searches
$$\tilde{x}^{(t+1)} = \Pi_{\mathcal{B}_\epsilon[\tilde{x}^{(0)}]}\Big(\tilde{x}^{(t)} + \beta\,\mathrm{sign}\big(\nabla_{\tilde{x}^{(t)}}\,\ell(f_\theta(\tilde{x}^{(t)}),\, y)\big)\Big), \qquad (2)$$
until a certain stopping criterion is satisfied. In Eq. (2), t ∈ ℕ, ℓ is the loss function, x̃^(t) is the adversarial data at step t, y is the corresponding label of the natural data, and Π_{B_ε[x̃^(0)]}(·) is the projection function that projects the adversarial data back into the ε-ball centered at x̃^(0).
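To make the generation step concrete, below is a minimal PyTorch sketch of the PGD inner loop of Eq. (2) under an L∞ constraint; the function name, arguments, and the clean (non-random) initialization are our own illustrative choices, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step_size=2/255, num_steps=10):
    """L-infinity PGD (Eq. (2)): gradient-sign ascent with projection."""
    x_adv = x.clone().detach()  # clean initialization for simplicity
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step on the sign of the gradient.
        x_adv = x_adv.detach() + step_size * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```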
2.2 KNOWLEDGE DISTILLATION
The idea of distillation from other models dates back to Craven & Shavlik (1996), and was re-introduced by Hinton et al. (2015) as knowledge distillation (KD). It has been widely studied in recent years (Yao et al., 2021) and works well in numerous applications like model compression and transfer learning. For adversarial defense, a few studies have explored obtaining adversarially robust models by distillation. Papernot et al. (2016) proposed defensive distillation, which utilizes the soft labels produced by a standard pre-trained teacher model, but this method was later shown not to be resistant to the C&W attack (Carlini & Wagner, 2017); Goldblum et al. (2020) combined AT with KD to transfer robustness to student models, and found that distilled models can outperform adversarially pre-trained teacher models of identical architecture in terms of adversarial robustness; Chen et al. (2021) utilized distillation as a regularization for adversarial training, employing robust and standard pre-trained teacher models to address robust overfitting (Rice et al., 2020).
Nonetheless, all these related methods fully trust teacher models and do not consider whether the guidance of the teacher model in distillation is reliable. In this paper, different from the previous studies, we find that the teacher model in adversarial distillation is not always trustworthy. Formally, adversarial distillation minimizes E_{x̃∈B_ε[x]}[ℓ_KL(S(x̃|τ) || T(x̃|τ))], where T(x̃|τ) is not a constant soft supervision along with adversarial training and is affected by the adversarial data generated by the dynamically evolving student network. Based on this, we propose reliable IAD to encourage student models to partially instead of fully trust teacher models, which effectively utilizes the knowledge from adversarially pre-trained models.
3 A CLOSER LOOK AT ADVERSARIAL DISTILLATION
In Section 3.1, we discuss the unreliability issue of adversarial distillation, i.e., the guidance of the teacher model becomes progressively unreliable along with adversarial training. In Section 3.2, we partition the training examples into three parts and analyze them part by part. Specifically, we argue that the student model should partially instead of fully trust the teacher model and gradually trust itself more along with adversarial training.
3.1 FULLY TRUST: PROGRESSIVELY UNRELIABLE GUIDANCE
As mentioned in the Introduction, previous methods (Goldblum et al., 2020; Chen et al., 2021) fully trust the teacher model when distilling robustness from adversarially pre-trained models. Taking Adversarially Robust Distillation (ARD) (Goldblum et al., 2020) as an example, we illustrate its procedure in the left part of Figure 1(b): the student model generates its adversarial data and then optimizes its predictions on them to mimic the output of the teacher model. However, although the teacher model is well optimized on the adversarial data queried by itself, we argue that it might not always be good at the increasingly challenging adversarial data queried by the student model.
As shown in Figure 1(a), different from ordinary distillation, in which the teacher model has consistent standard performance on the natural data, its robust accuracy on the student model's adversarial data decreases during distillation. The teacher model's guidance gradually fails to give the correct output on the adversarial data queried by the student model.
3.2 PARTIALLY TRUST: CONSTRUCTION OF RELIABLE GUIDANCE
The unreliability of the teacher model in adversarial distillation raises the challenge of how to conduct reliable adversarial distillation with unreliable teachers. Intuitively, this requires us to re-consider the guidance of adversarially pre-trained models along with adversarial training. For simplicity, we use T(x) (T(x̃)) to represent the predicted label of the teacher model on the natural (adversarial) examples, and use y to represent the target label. We partition the adversarial samples into the following parts as shown in the toy illustration (Figure 2(a)), and analyze them part by part.
1) T(x) = y ∩ T(x̃) = y: As seen in Figure 2(a), the part of data whose adversarial variants are like x′1 is the most trustworthy among the three parts, since the teacher model performs well on both natural and adversarial data. In this case, we can trust the guidance of the teacher model on this part of the data. However, as shown in Figure 2(b), the number of samples in this part decreases along with adversarial training. That is, what we can rely on from the teacher model in adversarial distillation is progressively reduced.
2) T(x) = y ∩ T(x̃) ≠ y: In Figure 2(b), we also track the number of data whose adversarial variants are like x′′1. In contrast to the previous category, the number of this kind of data increases during distillation. Since the teacher model's outputs on the small neighborhood of the queried natural data are not always correct, its knowledge may not be robust and its guidance for the student model is not reliable. Recalling the reason for the decrease in the teacher model's robust accuracy, the student model itself may also be trustworthy, since it gradually becomes adversarially robust during distillation.
3) T(x) ≠ y ∩ T(x̃) ≠ y: For data like x2 in Figure 2(a), the guidance of the teacher model is totally unreliable since its predicted labels on the natural data are wrong. The student model may instead trust itself, encouraging its outputs on adversarial data to mimic those on the corresponding natural data rather than the wrong outputs from the teacher model. First, this removes the potential threat that the teacher's guidance may act as a kind of label noise during training. Second, as an adversarial regularization (Zhang et al., 2019), it can improve model robustness by enhancing the stability of the model's outputs on natural and corresponding adversarial data.
4) T(x) ≠ y ∩ T(x̃) = y: Consider the generation process of the adversarial data, i.e., x̃* = arg max_{x̃∈B_ε(x)} ℓ(f(x̃), y). Once the original prediction is wrong, i.e., T(x) ≠ y, the generation of x̃* only makes the prediction worse. Thus, this group does not exist.
To sum up, we suggest employing reliable guidance from the teacher model and encouraging the student model to trust itself more, as the teacher model's guidance becomes progressively unreliable and the student model gradually becomes more adversarially robust.
4 INTROSPECTIVE ADVERSARIAL DISTILLATION
Based on the previous analysis of adversarial distillation, we propose Introspective Adversarial Distillation (IAD) to better utilize the guidance from the adversarially pre-trained model. Concretely, we use the following KD-style loss, composed of teacher guidance and student introspection.
$$\ell_{\mathrm{IAD}} = \underbrace{O(\mathrm{AD}_i;\,\alpha)}_{\text{Label \& Teacher Guidance}} + \gamma\,\underbrace{\ell_{\mathrm{KL}}\big(S(\tilde{x}\,|\,\tau)\,\big\|\,S(x\,|\,\tau)\big)}_{\text{Student Introspection}}, \qquad (3)$$
where O(AD_i; α) is a previous adversarial distillation baseline, e.g., ARD (Goldblum et al., 2020) or AKD2 (Chen et al., 2021), weighted by the hyper-parameter α (see footnote 1); γ is a weight for the student introspection; S(·|τ) is the Softmax operator with temperature τ ∈ (0, +∞), i.e., S(x_k|τ) = exp(x_k/τ) / Σ_{k′} exp(x_{k′}/τ); S(·) is the conventional Softmax with temperature τ = 1; T(·|τ) is the tempered variant of the teacher output T(·); x̃ is the adversarial data generated from the natural data x; y is the hard label; ℓ_CE is the cross-entropy loss; and ℓ_KL is the KL-divergence loss. As for the annealing parameter α ∈ [0, 1], which balances the effect of the teacher model in adversarial distillation, based on the analysis of the reliability of adversarial supervision in Section 3, we define it as
$$\alpha = \big(P_T(\tilde{x}\,|\,y)\big)^{\beta}, \qquad (4)$$
where P_T(·|y) is the teacher model's predicted probability of the target label y, and β is a hyperparameter that sharpens the prediction. The intuition behind IAD is to automatically calibrate the guidance from the teacher model based on its prediction on the adversarial data. Our α naturally corresponds to the construction in Section 3.2, since the teacher model's prediction probability on the adversarial data represents the categorical information well. As for β, we plot the specific values of α under its adjustment in the left panel of Figure 4.
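As a concrete illustration, the per-sample annealing weight of Eq. (4) can be computed as in the following sketch; the function and argument names are our own.

```python
import torch

def annealing_alpha(teacher_logits_adv, y, beta=0.1):
    """Eq. (4): alpha = P_T(x_adv | y) ** beta, computed per sample."""
    probs = torch.softmax(teacher_logits_adv, dim=1)
    p_target = probs.gather(1, y.unsqueeze(1)).squeeze(1)  # P_T(x_adv | y)
    return p_target.pow(beta)
```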
Intuitively, the student model can trust the teacher model when α approaches 1, which means that the teacher model is good at both natural and adversarial data. However, when α approaches 0, the teacher model is good at natural but not adversarial data, or even good at neither, and thus the student model should take its self-introspection into account. In Figure 3, we check the reliability of the student model itself. From the left panel of Figure 3, we can see that the student model becomes progressively robust to the adversarial data. And if we incorporate the student introspection into adversarial distillation, the results in the middle panel of Figure 3 confirm its
1Note that, we do not use α when ADi is ARD. Please refer to the Appendix A.2 for the ablation study.
Algorithm 1 Introspective Adversarial Distillation (IAD)
Input: student model S, teacher model T, training dataset D = {(x_i, y_i)}_{i=1}^n, learning rate η, number of epochs N, batch size m, number of batches M, temperature parameter τ, the annealing parameter α on the teacher model's predicted probability, adjustable parameters λ, λ1, λ2, λ3 and γ.
Output: adversarially robust model S_r
for epoch = 1, . . . , N do
    for mini-batch = 1, . . . , M do
        Sample a mini-batch {(x_i, y_i)}_{i=1}^m from D
        for i = 1, . . . , m (in parallel) do
            Obtain adversarial data x̃_i of x_i by PGD based on Eq. (2).
            Compute α for each adversarial data based on Eq. (4).
        end for
        IAD-I: θ ← θ − η∇_θ { λ ℓ_CE(S(x), y) + (1 − λ) · τ² · (ℓ_KL(S(x̃|τ) || T(x|τ)) + γ ℓ_KL(S(x̃|τ) || S(x|τ))) }, or
        IAD-II: θ ← θ − η∇_θ { λ1 ℓ_CE(S(x̃), y) + λ2 · τ² · (α · ℓ_KL(S(x̃|τ) || T_at(x̃|τ)) + λ3 τ² ℓ_KL(S(x̃|τ) || T_st(x̃|τ)) + γ ℓ_KL(S(x̃|τ) || S(x|τ))) }
    end for
end for
potential benefits for improving the accuracy of the guidance. Moreover, as shown in the right panel of Figure 3, adding self-introspection results in a larger improvement in model robustness compared to only using the guidance of the teacher model. Therefore, ℓ_IAD automatically encourages the outputs of the student model to mimic more reliable guidance in adversarial distillation.
Algorithm 1 summarizes the implementation of Introspective Adversarial Distillation (IAD). Specifically, IAD first leverages PGD to generate the adversarial data for the student model. Second, IAD computes the outputs of the teacher model and the student model on the natural data. Then, IAD encourages the student model's outputs to mimic those of itself and of the teacher model, weighted according to the teacher model's predicted probability on the adversarial data.
Warming-up period. During training, we add a warming-up period to activate the student model, where α (in Eq. (3)) is hardcoded to 1. This is because the student itself is not trustworthy in the early stage (see the left panel of Figure 3). Through this, we expect the student model to first evolve into a relatively reliable learner and then conduct the IAD procedure.
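Putting the pieces together, the following is a minimal PyTorch sketch of the IAD-I objective from Algorithm 1, reusing the `pgd_attack` and `annealing_alpha` helpers sketched above and the paper's choice γ = 1 − α; the reduction modes and the warm-up flag are our own assumptions rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F

def iad1_loss(student, teacher, x, y, tau=1.0, lam=0.0, beta=0.1, warmup=False):
    """IAD-I objective (Algorithm 1): teacher guidance on natural data plus
    per-sample gamma-weighted student introspection, gamma = 1 - alpha."""
    x_adv = pgd_attack(student, x, y)                      # Eq. (2)
    s_adv = F.log_softmax(student(x_adv) / tau, dim=1)     # S(x_adv | tau)
    s_nat = F.softmax(student(x) / tau, dim=1)             # S(x | tau)
    with torch.no_grad():
        t_nat = F.softmax(teacher(x) / tau, dim=1)         # T(x | tau)
        alpha = annealing_alpha(teacher(x_adv), y, beta)   # Eq. (4)
    if warmup:
        alpha = torch.ones_like(alpha)  # warming-up: alpha hardcoded to 1
    gamma = 1.0 - alpha                                    # per-sample weight
    kl_teacher = F.kl_div(s_adv, t_nat, reduction='none').sum(1)
    kl_self = F.kl_div(s_adv, s_nat, reduction='none').sum(1)
    distill = (tau ** 2) * (kl_teacher + gamma * kl_self).mean()
    return lam * F.cross_entropy(student(x), y) + (1.0 - lam) * distill
```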
4.1 COMPARISON WITH RELATED METHODS
In this section, we discuss the difference between IAD and other related approaches from the perspective of their loss functions. Table 1 summarizes all of them.
As shown in Table 1, AT (Madry et al., 2018) utilizes the hard labels to supervise adversarial training; TRADES (Zhang et al., 2019) decomposes the loss function of AT into two terms, one for standard training and the other one for adversarial training with the soft supervision; Motivated by KD (Hinton et al., 2015), Goldblum et al. (2020) proposed ARD to conduct the adversarial distillation, which fully
trusts the outputs of the teacher model to train the student model. As indicated by the experiments in Goldblum et al. (2020), a larger λ resulted in less robust student models; thus, they generally set λ = 0 in their experiments. Chen et al. (2021) utilized distillation as a regularization to avoid the robust overfitting issue, employing both an adversarially pre-trained teacher model and a standard pre-trained one. Thus, there are two KL-divergence losses, and for simplicity we term their method AKD2. Regarding IAD, there are two implementations, based respectively on ARD and AKD2. We term them IAD-I and IAD-II; their difference from ARD and AKD2 is an additional self-introspection term. Besides, we also apply α to down-weight the dependency on the term ℓ_KL(S(x̃|τ) || T_at(x̃|τ)), as explained in the previous sections.
5 EXPERIMENTS
We conduct extensive experiments to evaluate the effectiveness of IAD. In Section 5.1, we compare IAD with benchmark adversarial training methods (AT and TRADES) and some related methods which utilize adversarially pre-trained models via KD (ARD and AKD2) on CIFAR-10/CIFAR100 (Krizhevsky, 2009) datasets. In Section 5.2, we compare the previous methods with IAD on a more challenging dataset Tiny-ImageNet (Le & Yang, 2015). In Section 5.3, the ablation studies are conducted to analyze the effects of the hyper-parameter β and different warming-up periods for IAD.
Regarding the measures, we compute the natural accuracy on the natural test data and the robust accuracy on the adversarial test data generated by FGSM, PGD-20, and C&W attacks, following Wang et al. (2019). Moreover, we evaluate the performance under AutoAttack (termed AA).
5.1 EVALUATION ON CIFAR-10/CIFAR-100 DATASETS
Experiment Setup. In this part, we follow the setup (learning rate, optimizer, weight decay, momentum) of Goldblum et al. (2020) to implement the adversarial distillation experiments on the CIFAR-10/CIFAR-100 datasets. Specifically, we train ResNet-18 under different methods using SGD with 0.9 momentum for 200 epochs. The initial learning rate is 0.1, divided by 10 at Epoch 100 and Epoch 150 respectively, and the weight decay is 0.0002. In the adversarial training settings, we set the perturbation bound ε = 8/255, the PGD step size σ = 2/255, and the number of PGD steps K = 10. In the distillation settings, we use τ = 1 and use the models pre-trained by AT and TRADES that have the best PGD-10 test accuracy as the teacher models for ARD, AKD2 and our IAD. For ARD, we set its hyper-parameter λ = 0 as recommended in Goldblum et al. (2020) for better robustness. For AKD2, we set λ1 = 0.25, λ2 = 0.5 and λ3 = 0.25 as recommended in Chen et al. (2021). For IAD-I and IAD-II, we respectively set the warming-up period to 60/80 and 40/80 epochs to train on
CIFAR-10/CIFAR-100. Regarding the computation of α, we set λ = 0, β = 0.1. For γ, we currently set γ = 1 − α; more ablation studies on this setting can be found in Appendix A.3.
Results. We report the results in Table 2, where the results of AT and TRADES are listed in the first and fifth rows, and the other methods use these models as teacher models in distillation. On CIFAR-10 and CIFAR-100, our IAD-I and IAD-II obtain consistent improvements in adversarial robustness in terms of PGD-20, CW∞ and AA accuracy compared with the student models distilled by ARD or AKD2 and the adversarially pre-trained teacher models. Besides, IAD-II generally performs better than IAD-I when the teacher is trained by AT, which means AKD2 in this case could be a better starting point than ARD. However, when the teacher model is trained by TRADES, the robustness advantage is reversed, with IAD-I outperforming IAD-II. Considering their distillation philosophies, i.e., ℓ_KL(S(x̃|τ) || T(x|τ)) for IAD-I and ℓ_KL(S(x̃|τ) || T(x̃|τ)) for IAD-II, it may depend on which of T(x|τ) and T(x̃|τ) is more informative in adversarial distillation from the different teachers. The natural accuracy of IAD-I is sometimes lower than that of the other methods, but the performance drop is not very significant compared to IAD-II.
Experiment Setup. In this part, we evaluate these methods with a larger-capacity model, i.e., WideResNet-34-10. The teacher network is trained by AT and TRADES, following the settings of Zhang et al. (2021). We keep most settings of the baselines the same as in the previous experiment. For IAD-I and IAD-II, we set a 5/10-epoch warming-up period on CIFAR-10/CIFAR-100.
Results. The results are summarized in Table 3. Similarly, on CIFAR-10 and CIFAR-100, our method achieves better model robustness than ARD, AKD2 and the original teacher models. Moreover, our IAD methods do not sacrifice much standard performance compared with the original teacher models. Since AKD2 additionally utilizes a standard pre-trained teacher model, IAD-II consistently achieves better natural performance than IAD-I. However, in terms of robustness, IAD-I generally achieves results comparable to or even better than IAD-II under both AT and TRADES.
5.2 EVALUATION ON Tiny-ImageNet DATASET
Experiment Setup. In this part, we evaluate these methods on a more challenging Tiny-ImageNet dataset. For these adversarially pre-trained models, we follow the settings of (Chen et al., 2021) to train AT and TRADES. To be specific, we train PreActive-ResNet-18 using SGD with 0.9 momentum
for 100 epochs. The initial learning rate is 0.1, divided by 10 at Epoch 50 and 80 respectively, and the weight decay is 0.0005. For the distillation baselines, we keep most settings the same as in Section 5.1. For ARD and IAD-I, we set λ = 0.9 to deal with the complex task, following Goldblum et al. (2020). For both IAD-I and IAD-II, we use λ = 0.1, β = 0.1 and 10 warming-up epochs.
Results. We report the results in Table 4. Overall, our IAD-I and IAD-II still achieve better model robustness than the other methods. Specifically, on Tiny-ImageNet, IAD-II improves both the natural accuracy and the robust accuracy compared to IAD-I and the other baselines.
5.3 ABLATION STUDIES
To give a comprehensive understanding of our proposed IAD method, we have conducted a series of experiments (in Appendix A), including ablation studies on different γ for the student introspection, different τ for adversarial distillation, and the loss terms of IAD-II, as well as a comparison of computational cost. In the following, we only study the effect of β and of the warming-up period for our student introspection. More ablation studies can be found in the Appendix.
Experiment Setup. To understand the effects of different β values and different warming-up periods on the CIFAR-10 dataset, we conduct an ablation study in this part. Specifically, we choose ResNet-18 as the backbone model and keep the experimental settings the same as in Section 5.1. In the first experiment, we use no warming-up period and study the effect of different β values. In the second experiment, we set β = 0.1 and investigate different warming-up periods.
Results. We report part of the results for the IAD-I method in Figure 4. The complete results with other evaluation metrics, as well as those for the IAD-II method, can be found in Appendix A.1. In Figure 4, we first visualize the values of α for different β in the left panel, which shows the proportion of teacher guidance and student introspection in adversarial distillation. A larger β corresponds to a larger proportion of student introspection. In the middle panel, we plot the natural and PGD-20 accuracy of the student models distilled with different β. We note that the PGD-20 accuracy improves when the student model trusts itself more with larger β values. However, the natural accuracy decreases as β increases. Similarly, we adjust the length of the warming-up period and check the natural and PGD-20 accuracy in the right panel of Figure 4. We find that letting the student model partially trust itself from the beginning of the training process leads to inadequate robustness improvements. An appropriate warming-up period in the early stage can improve the student model's performance on adversarial examples.
6 CONCLUSION
In this paper, we study distillation from adversarially pre-trained models. We take a closer look at adversarial distillation and discover that the guidance of the teacher model becomes progressively unreliable when robustness is considered. Hence, we explore the construction of reliable guidance in adversarial distillation and propose a method for distillation from unreliable teacher models, i.e., Introspective Adversarial Distillation. Our method encourages the student model to partially instead of fully trust the guidance of the teacher model, and to gradually trust its self-introspection more to improve robustness.
7 ACKNOWLEDGEMENT
JNZ and BH were supported by the RGC Early Career Scheme No. 22200720, NSFC Young Scientists Fund No. 62006202 and HKBU CSD Departmental Incentive Grant. JCY and HXY was supported by NSFC No. U20A20222. JFZ was supported by JST, ACT-X Grant Number JPMJAX21AF. TLL was supported by Australian Research Council Projects DE-190101473 and DP-220102121. JLX was supported by RGC Grant C6030-18GF.
8 ETHICS STATEMENT
This paper does not raise any ethics concerns. This study does not involve any human subjects, practices to data set releases, potentially harmful insights, methodologies and applications, potential conflicts of interest and sponsorship, discrimination/bias/fairness concerns, privacy and security issues, legal compliance, and research integrity issues.
9 REPRODUCIBILITY STATEMENT
To ensure the reproducibility of experimental results, our code is available at https://github.com/ZFancy/IAD.
A COMPREHENSIVE EXPERIMENTS
A.1 COMPLETE RESULTS OF ABLATION STUDIES.
In this section, we report the complete results of our ablation studies in Tables 5 (about β) and 6 (about warming-up periods).
In Table 5, we can see that the natural and FGSM accuracy decrease, while the robust accuracy (PGD-20, CW∞, AA) increases, as β rises. In Table 6, we adjust the length of the warming-up period. We can see that letting the student network partially trust itself at the beginning of the training process results in inadequate robustness improvements. In summary, Table 5 shows a trade-off between natural accuracy and robustness in adversarial distillation, similar to standard adversarial training. We can slightly sacrifice robustness (by adjusting β or the warming-up period) to acquire better natural and FGSM accuracy.
A.2 FULLY TRUST OR PARTIALLY TRUST THE TEACHER GUIDANCE
In this section, we compare the methods with fully trusted and partially trusted teacher guidance. Regarding the variant of AKD2 that replaces the second term in AKD2 with our "partially trust" KL term (but without the introspection term), we find a robustness improvement, as shown in Table 7.
In the "partially trust" variant of AKD2, we simply down-weight the part of the teacher guidance whose predictions disagree with the hard labels. The results show that fitting this "unreliable" teacher guidance may improve natural performance but consistently drops robust accuracy.
In the following, we also conduct a range of experiments comparing IAD-I with its variant IAD-I-Down, which applies our down-weighting to ℓ_KL(S(x̃|τ) || T(x|τ)) rather than using a constant 1.0 as in ARD. According to the results, IAD-I consistently achieves better robustness while sacrificing a little natural accuracy. Hence, we choose the constant 1.0 on ℓ_KL(S(x̃|τ) || T(x|τ)) for ARD in the main experiments on the benchmark datasets.
A.3 THE EFFECTS OF γ FOR STUDENT INTROSPECTION
In this section, we check the effects of the student introspection by adjusting γ in our IAD methods. Here, we also use a constant γ in the experiments. We find that using a constant γ can further boost model robustness, but the natural accuracy drops a bit more than with our previous dynamic design, i.e., γ = 1 − α.
According to the results in Table 9, we can hardly find an optimal coefficient for the student introspection term that achieves both the best natural accuracy and the best robustness. However, one trend is that increasing the coefficient gains more robustness at the cost of more natural accuracy. The hyper-parameter γ may thus be flexibly instantiated with constants or strategic schedules to pursue either robustness or natural accuracy.
A.4 THE EFFECTS OF τ FOR ADVERSARIAL DISTILLATION
In this section, we check the effects of τ on adversarial distillation under the same training configurations, and also on TRADES. We list the results of TRADES in Table 10, and the results of IAD-I and ARD in Table 11.
As for TRADES, we think it may not need the temperature τ, which is specifically designed for distillation, since the KL term in TRADES aims to encourage the model's output on adversarial data to be similar to that on natural data. Intuitively, it encourages the output to be more stable, which leads to better robust performance. As a result, based on the test accuracy in Table 10, τ = 1 may be the best for TRADES to achieve better robustness, and enlarging (or decreasing) τ can disturb the original output logits and may result in lower robustness (e.g., τ = 0.1: PGD-20 43.26%).
According to the comparison in Table 11, with a proper τ (e.g., τ ≥ 1), IAD-I can achieve comparable natural accuracy and better robustness. Note that in the experiments of IAD-I, we keep τ = 1 for the student introspection term, according to the results of TRADES in Table 10.
A.5 COMPARISON WITH TRADES AND ARD
In this section, we compare the performance of TRADES, ARD and our IAD-I with different weights on each of their loss terms. We summarize the results in Table 12.
According to the results, IAD-I achieves better robust accuracy than ARD, and better natural accuracy than TRADES with a smaller drop in PGD-20 accuracy. Varying the hyper-parameters on the loss terms, we also find that natural accuracy can reach 85.94% with robustness dropping to 48.58%, and robustness can reach 55.89% with natural accuracy dropping to 74.95%. In summary, by adjusting those weights, our IAD-I can flexibly strike a good balance between natural accuracy and robustness.
A.6 ABLATION STUDY ON THE LOSS TERMS OF IAD-II
In this section, we conduct an ablation study (Table 13) on the four loss terms of IAD-II, which is based on AKD2.
Specifically, according to the results, we find that ℓ_KL(S(x̃|τ) || T_st(x̃|τ)) helps the model gain more natural accuracy, while ℓ_KL(S(x̃|τ) || T_at(x̃|τ)) helps the model gain more robustness. AKD2 achieves a good balance between the two aspects by combining these terms, and IAD-II further boosts model robustness by incorporating the self-introspection term with little drop in natural performance.
A.7 COMPUTATIONAL COST COMPARISON
In this section, we check and compare the computational cost of ARD, AKD2, IAD-I and IAD-II in terms of the average training time per epoch, as well as GPU memory usage. The detailed results are summarized in Table 14.
According to the results, IAD-I and IAD-II consume slightly more time and memory than ARD and AKD2, respectively, due to the additional self-introspection. Besides, an interesting phenomenon is that ARD has fewer terms than AKD2, but consumes more time and GPU memory. This is because ARD has to deal with both x and x̃, while AKD2 only needs to deal with x̃. | 1. What is the main contribution of the paper regarding adversarial training and distillation?
2. What are the strengths of the proposed approach, particularly in its novelty and significance to the field?
3. Do you have any concerns or questions about the research direction and its practicality?
4. Can you explain the failure case of previous papers and how the proposed framework addresses this issue?
5. How does the proposed method utilize the knowledge from an adversarially pre-trained teacher model effectively?
6. What are the limitations of the paper, such as the performance drop in certain situations?
7. How does the parameter \tau affect the performance of the proposed method and its comparison to other methods?
8. Are there any additional experiments or analyses that could be conducted to further support the claims made in the paper?
9. How does the proposed method compare to other state-of-the-art methods in terms of computational cost and efficiency?
10. Can you provide more insights into the results presented in Figure 1(b) and their implications for the proposed method? | Summary Of The Paper
Review | Summary Of The Paper
Adversarial training has been developed for many years, and it aims to provide reliable classifiers in the era of deep learning. However, in many ML-driven scenarios, users might not want to retrain their big deep networks using adversarial training. The main reason is that training a reliable/adversarially robust classifier costs substantial computational resources. To address this issue, some methods have been proposed to distill the robustness from adversarially pre-trained models, which is more practical than direct adversarial training. This is also the point this paper focuses on. In my humble opinion, this is a very promising research direction and should receive more attention in the future. Compared to existing works, this paper argues that the adversarial training data of the student model and that of the teacher model are egocentric (respectively generated by themselves) and become more adversarially challenging during training, which causes existing works to fail. To address this issue, this paper proposes an Introspective Adversarial Distillation (IAD) to effectively utilize the knowledge from an adversarially pre-trained teacher model. Extensive experiments are conducted on CIFAR-10/CIFAR-100 and the more challenging Tiny-ImageNet datasets to evaluate the effectiveness of IAD.
Review
Pros:
The research direction is promising and more practical than adversarial training in the real world. As I noted in the summary, in many ML-driven scenarios, users might not want to retrain their big deep networks using adversarial training due to the huge computational cost. Thus, it is, IMHO, urgent to consider distilling the robustness of a robust model.
Although the high-level idea is mentioned in previous papers, this paper considers the problem in detail and finds a failure case of previous papers. By addressing this failure case, the paper proposes a novel framework to complete this task, which is novel and significant for the development of the field.
This paper is well-written and easy to follow. The experiments are sufficient to support the claims made in the paper. A plus is that both TRADES and AT are considered in the experiments, which verifies that the proposed framework is compatible with existing adversarial training methods.
Cons:
Since the student model is different from the teacher model, it is expected that adversarial data generated by the student model is different from the data used to train the teacher model. This kind of performance drop also happens in other fields, such as domain adaptation, even adversarial machine learning itself. Thus, it is not clear what the point of Figure 1(a) is. I am not sure if 1(a) is necessary. It is more like the transferability of adversarial attacks.
Based on the above drawback, it is also unclear whether there are other factors that make the DISTILLATION fail (besides the distributional discrepancy).
The formal problem setting is missing. It is better to provide the formal problem setting to make readers understand the problem clearly.
What is the difference between S(\circ|\tau) and S(\circ)? I did not see any definition of S(\circ|\tau). How does \tau affect S(\circ|\tau)?
I would like to see how \tau affects the performance of the proposed method. If we remove \tau, then the “Student Introspection” term is actually from TRADES. Meanwhile, how does \tau affect the performance of TRADES? Are there any papers discussing this point?
From the experimental results, IAD performs much better than ARD (IMHO, the most direct baseline for IAD), which is very good. I would like to see how \tau affects the performance of IAD compared to ARD (giving the same \tau to ARD and IAD).
The computational-cost comparison is missing, which is important for distillation-based method.
Figure 1(b) is a clear figure, but I cannot see the advantages of your method in this figure. In other words, it is unclear why your method can improve the performance when reading 1(b). |
ICLR | Title
Rewiring with Positional Encodings for GNNs
Abstract
Several recent works use positional encodings to extend the receptive fields of graph neural network (GNN) layers equipped with attention mechanisms. These techniques, however, extend receptive fields to the complete graph, at substantial computational cost and risking a change in the inductive biases of conventional GNNs, or require complex architecture adjustments. As a conservative alternative, we use positional encodings to expand receptive fields to r-hop neighborhoods. More specifically, our method augments the input graph with additional nodes/edges and uses positional encodings as node and/or edge features. We thus modify graphs before inputting them to a downstream GNN model, instead of modifying the model itself. This makes our method model-agnostic, i.e. compatible with any existing GNN architectures. We also provide examples of positional encodings that are lossless with a one-to-one map between the original and the modified graphs. We demonstrate that extending receptive fields via positional encodings and a virtual fully-connected node significantly improves GNN performance and alleviates over-squashing using small r. We obtain improvements on a variety of models and datasets, and reach state-of-the-art performance using traditional GNNs or graph Transformers.
1 INTRODUCTION
GNN layers typically embed each node of a graph as a function of its neighbors’ (1-ring’s) embeddings from the previous layer; that is, the receptive field of each node is its 1-hop neighborhood. Hence, at least r stacked GNN layers are needed for nodes to get information about their r-hop neighborhoods. Barceló et al. (2020) and Alon and Yahav (2021) identify two broad limitations associated with this structure: under-reaching occurs when the number of layers is insufficient to communicate information between distant vertices, while over-squashing occurs when certain edges act as bottlenecks for information flow.
Inspired by the success of the Transformer in natural language processing (Vaswani et al., 2017), recent methods expand node receptive fields to the whole graph (Dwivedi and Bresson, 2021; Ying et al., 2021). Since they effectively replace the topology of the graph with that of a complete graph, these works propose positional encodings that communicate the connectivity of the input graph as node or edge features. As these methods operate on fully-connected graphs, the computational cost of each layer is quadratic in the number of nodes, obliterating the sparsity afforded by conventional 1-ring based architectures. Moreover, the success of the 1-ring GNNs suggests that local feature aggregation is a useful inductive bias, which has to be learned when the receptive field is the whole graph, leading to slow and sensitive training.
In this paper, we expand receptive fields from 1-ring neighborhoods to r-ring neighborhoods, where r ranges from 1 (typical GNNs) to R, the diameter of the graph (fully-connected). That is, we augment a graph with edges between each node and all others within distance r in the input topology. We show that performance is significantly improved using fairly small r and carefully-chosen positional encodings annotating this augmented graph. This simple but effective approach can be combined with any GNN.
Contributions. We apply GNN architectures to augmented graphs connecting vertices to their peers of distance ≤ r. Our contributions are as follows: (i) We increase receptive fields using a modified graph with positional encodings as edge and node features. (ii) We compare r-hop positional encodings on the augmented graph, specifically lengths of shortest paths, spectral computations, and
powers of the graph adjacency matrix. (iii) We demonstrate that relatively small r-hop neighborhoods sufficiently increase performance across models and that performance degrades in the fully-connected setting.
2 RELATED WORK
The Transformer has permeated deep learning (Vaswani et al., 2017), with state-of-the-art performance in NLP (Devlin et al., 2018), vision (Parmar et al., 2018), and genomics (Zaheer et al., 2020). Its core components include multi-head attention, an expanded receptive field, positional encodings, and a CLS-token (virtual global source and sink nodes). Several works adapt these constructions to GNNs. For example, the Graph Attention Network (GAT) performs attention over the neighborhood of each node, but does not generalize multi-head attention using positional encodings (Veličković et al., 2018). Recent works use Laplacian spectra, node degrees, and shortest-path lengths as positional encodings to expand attention to all nodes (Kreuzer et al., 2021; Dwivedi and Bresson, 2021; Rong et al., 2020; Ying et al., 2021). Several works also adapt attention mechanisms to GNNs (Yun et al., 2019; Cai and Lam, 2019; Hu et al., 2020; Baek et al., 2021; Veličković et al., 2018; Wang et al., 2021b; Zhang et al., 2020; Shi et al., 2021).
Path and distance information has been incorporated into GNNs more generally. Yang et al. (2019) introduce the Shortest Path Graph Attention Network (SPAGAN), whose layers incorporate path-based attention via shortest paths between a center node and distant neighbors, using an involved hierarchical path aggregation method to aggregate a feature for each node. Like us, SPAGAN introduces the ≤ k-hop neighbors around the center node as a hyperparameter; their model, however, has hyperparameters controlling path sampling. Beyond SPAGAN, Chen et al. (2019) concatenate node features, edge features, distances, and ring flags to compute attention probabilities. Li et al. (2020) show that distance encodings (i.e., one-hot feature of distance as an extra node attribute) obtain more expressive power than the 1-Weisfeiler-Lehman test. Graph-BERT introduces multiple positional encodings to apply Transformers to graphs and operates on sampled subgraphs to handle large graphs (Zhang et al., 2020). Yang et al. (2019) introduce the Graph Transformer Network (GTN) for learning a new graph structure, which identifies “meta-paths” and multi-hop connections to learn node representations. Wang et al. (2021a) introduce the Multi-hop Attention Graph Neural Network (MAGNA), which uses diffusion to extend attention to multi-hop connections. Frankel et al. (2021) extend GAT attention to a stochastically-sampled neighborhood of neighbors within 5 hops of the central node. Isufi et al. (2020) introduce EdgeNets, which enable flexible multi-hop diffusion. Luan et al. (2019) generalize spectral graph convolution and GCN in block Krylov subspace forms.
Each layer of our GNN attends to the r-hop neighborhood around each node. Unlike SPAGAN and Graph-BERT, our method is model agnostic and does not perform sampling, avoiding their sampling-ratio and number-of-iterations hyperparameters. Unlike GTN, we do not restrict to a particular graph structure. Broadly, our approach does not require architecture or optimization changes. Thus, our work also joins a trend of decoupling the input graph from the graph used for information propagation (Veličković, 2022). For scalability, Hamilton et al. (2017) sample from a node’s local neighborhood to generate embeddings and aggregate features, while Zhang et al. (2018) sample to deal with topological noise. Rossi et al. (2020) introduce Scalable Inception Graph Neural Networks (SIGN), which avoid sampling by precomputing convolutional filters. Kipf and Welling (2017) preprocess diffusion on graphs for efficient training. Topping et al. (2021) use graph curvature to rewire graphs and combat over-squashing and bottlenecks.
In contrast, our work does not use diffusion, curvature, or sampling, but expands receptive fields via Transformer-inspired positional encodings. In this sense, we avoid the inductive biases from pre-defined notions of diffusion and curvature, and since we do not remove connectivity, injective lossless changes are easy to obtain.
3 PRELIMINARIES AND DESIGN
Let G = (V,E, fv, fe) denote a graph with nodes V ⊂ N0 and edges E ⊆ V × V , and let G be the set of graphs. For each graph, let functions fv ∶ V → Rdv and fe ∶ E → Rde denote node and edge features, respectively. We consider learning on graphs, specifically node classification and graph classification. At inference, the input is a graph G. For node classification, the task is to predict
a node label lv(v) ∈ R for each vertex v ∈ V . Using the node labels, the homophily of a graph is defined as the fraction of edges that connect nodes with the same labels (Ma et al., 2022). For graph classification, the task is to predict a label lG ∈ R for the entire graph G. Given the tasks above, GNN architectures typically ingest a graph G = (V,E, fv, fe) and output either a label or a per-node feature. One can view these as an abstraction; e.g. a GNN for graph classification is a map Fθ ∶ G → Rn with learnable parameters θ. These architectures vary in terms of how they implement Fθ. Some key examples include the following: (i) Spatial models (Kipf and Welling, 2017) use the graph directly, computing node representations in each layer by aggregating representations of a node and its neighbors (1-ring). (ii) Spectral models (Bruna et al., 2014) use the eigendecomposition of the graph Laplacian to perform spectral convolution. (iii) Diffusion models (Wang et al., 2021a; Klicpera et al., 2019) use weighted sums of powers of the adjacency matrix to incorporate larger neighborhoods (r-hops). (iv) In Transformers (Kreuzer et al., 2021; Dwivedi and Bresson, 2021; Rong et al., 2020; Ying et al., 2021), each node forms a new representation by self-attention over the complete graph (R-hop neighborhood) using positional encodings. These approaches incorporate useful inductive biases while remaining flexible enough to learn from data.
Spatial models have been extremely successful, but recent work shows that they struggle with under-reaching and over-squashing (Alon and Yahav, 2021). Spectral approaches share a similar convolutional bias with spatial models and face related problems (Kipf and Welling, 2017). On the other hand, Transformers with complete attention and diffusion aim to alleviate the shortcomings of spatial models and show promising results. Due to complete attention, Transformers carry little inductive bias but are also computationally expensive. Diffusion explicitly incorporates the inductive bias that distant nodes should be weighted less in message aggregation, limiting its breadth of applicability.
We alleviate under-reaching and over-squashing while avoiding the computational load of complete attention by incorporating a more general proximity bias than diffusion without committing to a specific model. Our method is built on the observation that Fθ can be trained to ingest modified versions of the original graph that better communicate structure and connectivity. Hence, we add new edges, nodes, and features to the input graph. To still convey the original topology of the input graph, we add positional encodings. More formally, we design functions g ∶ G → G that modify graphs and give features to the new nodes and edges. These functions can be prepended to any GNN Fθ ∶ G → Rn as Fθ ○ g ∶ G → Rn. The following are desiderata informing our design of g: (i) ability to capture the original graph, (ii) ability to incorporate long-range connections, (iii) computational efficiency, and (iv) minimal and flexible locality bias. By using positional encodings and maintaining the original graph G as a subgraph of the modified graph, we capture the original graph in our modified input (Section 4.2.1). By expanding the receptive field around each node to r-hop neighborhoods we reduce computational load relative to complete-graph attention, with limited inductive bias stemming from proximity. Additionally, expanded receptive fields alleviate under-reaching and over-squashing (Section 6.1).
4 APPROACH
We modify graphs before inputting them to a downstream GNN model, instead of modifying the model itself. Our approach does not remove edges or nodes in the original graph but only adds elements. Given input G = (V,E, fv, fe), we create a new graph G′ = (V ′,E′, f ′v, f ′e) such that G is a subgraph of G′. Expanded receptive fields are achieved in G′ by adding edges decorated with positional encodings as node or edge attributes; we also add a fully-connected CLS node. G′ is still a graph with node and edge attributes to which we may apply any GNN. This process is represented by a function g ∶ G → G. We decompose the construction of g into topological rewiring and positional encoding, detailed below. In a slight abuse of notation, we will subsequently use G to denote only the subset of graphs relevant to a given machine learning problem. For example, for graph regression on molecules, G denotes molecule graphs, with atoms as nodes and bonds as edges.
4.1 TOPOLOGICAL REWIRING
We modify the input graph G to generate G′ in two steps:
Expanded receptive field. Given a graph G = (V,E, fv, fe) ∈ G and a positive integer r ∈ N+, we add edges between all nodes within r hops of each other in G to create G′r = (V,E′, f ′v, f ′e). If G is annotated with edge features, we assign to each edge in E′ ∖ E an appropriate constant feature Ce. CLS node. Following Gilmer et al. (2017), we also include a “CLS”—or classification—node in our graph, connected to all others. We follow this procedure: Given a graph G, we (i) initialize a new graph G′ = (V ′,E′, f ′v, f ′e) = G, (ii) add a new node vCLS to V ′, and (iii) set f ′v(vCLS) ∶= Cv for a constant Cv . Finally, we set E′ ∶= E ∪ ⋃v∈V {(vCLS, v), (v, vCLS)}, with f ′e((vCLS, v)) = f ′e((v, vCLS)) ∶= Ce, where Ce is defined above.
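Below is a minimal sketch of this two-step rewiring with networkx; the attribute names, constant features, and the choice to store shortest-path lengths on new edges are our own illustrative assumptions rather than the paper's implementation.

```python
import networkx as nx

def rewire(G, r, C_e=-1.0, C_v=-1.0):
    """Add edges between all nodes within r hops, then a fully-connected
    CLS node; original edges and their features are kept intact."""
    Gp = G.copy()
    nx.set_edge_attributes(Gp, 1, 'sp_len')  # original edges: distance 1
    # BFS-based shortest-path lengths up to distance r from every node.
    dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=r))
    for u, d in dist.items():
        for v, length in d.items():
            if u != v and not Gp.has_edge(u, v):
                # New edge with constant feature C_e; sp_len doubles as the
                # shortest-path positional encoding of Section 4.2.2.
                Gp.add_edge(u, v, feat=C_e, sp_len=length)
    # CLS node connected to all original nodes (sp_len 0 is our convention).
    cls = max(G.nodes) + 1  # assumes integer node labels, as V is a subset of N0
    Gp.add_node(cls, feat=C_v)
    for v in G.nodes:
        Gp.add_edge(cls, v, feat=C_e, sp_len=0)
    return Gp
```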
4.2 POSITIONAL ENCODINGS
Given only the connectivity of a rewired graph G′r = (V ′,E′, f ′v, f ′e) from the two-step procedure above, it may not be possible to recover the connectivity of the original graph G = (V,E, fv, fe). In the extreme, when r is large and G is connected, G′r could become fully-connected, meaning that all topology is lost—removing the central cue for graph-based learning. To combat this, we encode the original topology of G into G′r via positional encodings, which are node and/or edge features. We consider several positional encoding functions for edges pe ∶ G × V ′ × V ′ → Rn or nodes pv ∶ G × V ′ → Rn, appending the output of pe as edge or pv as node features to G′r. Section 4.2.1 lays out properties to compare choices of pe and/or pv . Then, Section 4.2.2 provides concrete positional encodings compared in our experiments that trade off between the properties we lay out.
4.2.1 PROPERTIES OF POSITIONAL ENCODINGS
There are countless ways to encode the subgraph topology of G within G′ in vertex features pv or edge features pe. Below, we state a few properties we can check to give a framework for comparing the capabilities and biases of possible choices.
Lossless encoding. While a GNN can ignore information in input G′, it cannot reconstruct information that has been lost in constructing G′ from G. Yet, there can be benefits in forgetting information, e.g. when dealing with noisy graphs or incorporating a stronger inductive bias (Rossi et al., 2020; Klicpera et al., 2019). That said, a simple property to check for G′ equipped with positional encoding features pe, pv is whether we can recover G from this information, that is, whether our encoding is lossless (or non-invasive). As long as it is possible to identify G within g(G), g is an injection and non-invasive. Hence, a sufficient condition for lossless positional encodings is as follows: If all edges in G′ have unique positional encodings, then g ∶ G → G is a bijection. One way to achieve this condition is to use an additional edge feature that is unique to the 1-ring.
Discriminative power. Following work investigating the discriminative power of GNNs (Xu et al., 2019; Brüel Gabrielsson, 2020), Ying et al. (2021) showed that expanded receptive fields together with shortest-path positional encodings are strictly more powerful than the 1-Weisfeiler-Lehman (WL) test and hence more powerful than 1-hop vanilla spatial GNN models (Xu et al., 2019). The combination of increased receptive fields, positional encodings, and choice of subsequent GNN models determines discriminative power. In fact, it follows from (Ying et al., 2021) that the positional encodings presented below together with an increased receptive field r > 1 and a vanilla spatial GNN model are strictly more powerful than the 1-WL test.
Computational time. Positional encodings may come at substantial computational cost when working with r-hop neighborhoods. The cost of computing positional encodings affects total inference time, which may be relevant in some learning settings. However, in our setting the runtime of computing positional encodings is an order of magnitude less than the subsequent inference time, and in our implementation the asymptotic runtimes of computing the positional encodings are the same. See Appendix E.
Local vs. global. The positional encoding of a vertex or edge can be local, meaning it incorporates information from a limited-sized neighborhood in G, or global, in which case adding or removing a node anywhere in G could affect all the positional encodings.
Inductive bias. Our positional encodings can bias the results of the learning procedure, effectively communicating to the downstream GNN which properties of G and G′ are particularly important for learning. Without positional encodings, our model would induce a bias stating that distances < r in our graph are insignificant. More subtly, suppose ℓ is the distance (of length ≤ r) between two nodes in G corresponding to a new edge in E′. Using ℓ directly as positional encoding rather than a decaying function, e.g. e^{−αℓ}, makes it easier or harder (resp.) to distinguish long distances in G.
A related consideration involves whether our model can imitate the inductive bias of past work. For example, graph diffusion has been used to incorporate multi-hop connections into GNNs using fixed weights (Wang et al., 2021a). We can ask whether our positional encodings on G′ are sufficient to learn to imitate the behavior of a prescribed multi-hop model on G, e.g. whether a layer of our GNN applied to G′ can capture multi-hop diffusion along G.
Over-squashing and under-reaching. Section 6.1 demonstrates, via the NeighborsMatch problem (Alon and Yahav, 2021), that increased receptive fields as well as the CLS-node alleviate over-squashing; however, this toy problem is concerned with matching node attributes and not with graph topology. We want positional encodings that alleviate over-squashing in the sense that they enable effective information propagation for the task at hand. Our experiments showcase this: expanded receptive fields alleviate the over-squashing problem, and the best-performing positional encoding varies across datasets. Additionally, our experiments on the discriminative power of positional encodings in Appendix D further help discern the different options.
4.2.2 POSITIONAL ENCODING OPTIONS
Taking the properties above into consideration, we now give a few options for positional encodings, compared empirically in Section 6.
Shortest path. For any edge e ∈ G′r, the shortest-path positional encoding takes pe ∈ {0,1, . . . , r} to be the integer length of the shortest path in G between the nodes that e connects. These embeddings are lossless because G is the subgraph of g(G) with pe = 1. They are also free to compute given our construction of G′r from G. However, multiple vertices in the r-neighborhood of a vertex in V could have the same positional encoding in V′, and shortest-path lengths are insufficient to capture complex inductive biases of multi-hop GNNs like diffusion over large neighborhoods. Shortest-path positional encoding was previously used by Ying et al. (2021) for extending G to a fully-connected graph, but they did not consider smaller r values.
Spectral embedding. Laplacian eigenvectors embed graph vertices into Euclidean space, providing per-vertex features that capture multi-scale graph structure. They are defined by factorizing the normalized graph Laplacian matrix, ∆ = I − D^{−1/2} A D^{−1/2}, where D is the degree matrix and A is the adjacency matrix. We call the result a spectral positional embedding. We can use the q smallest non-trivial Laplacian eigenvectors of G as a node-based positional encoding pv ∶ V′ → R^q. Following Dwivedi et al. (2020), since these eigenvectors are known only up to a sign, we randomly flip the sign during training. Prior work considers Laplacian eigenvectors as additional node features without topological rewiring (Dwivedi et al., 2020).
Spectral positional encodings do not necessarily make g injective. Even when q = ∣V∣, this encoding fails to distinguish isospectral graphs (Von Collatz and Sinogowitz, 1957), but these are rarely encountered in practice. On the other hand, spectral signatures are common for graph matching and other tasks. Moreover, unlike the remaining features in this section, spectral positional encodings capture global information about G rather than only r-neighborhoods. Finally, we note that the diffusion equation for graphs can be written as u_t = −∆u; this graph PDE can be solved in closed form given the eigenvectors and eigenvalues of ∆. Hence, given the spectral embedding of G in G′, we can simulate diffusion-based multi-hop GNN architectures up to spectral truncation.
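A minimal sketch of the spectral positional encoding, assuming a connected graph and using a dense eigendecomposition for clarity; the function name is ours, and large graphs would call a sparse eigensolver instead.

import numpy as np
import networkx as nx

# Sketch: q smallest non-trivial eigenvectors of the normalized Laplacian as
# per-node features, with random sign flips at training time.
def spectral_pe(G: nx.Graph, q: int, train: bool = True) -> np.ndarray:
    A = nx.to_numpy_array(G)
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    L = np.eye(len(G)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(L)     # columns sorted by ascending eigenvalue
    pv = eigvecs[:, 1 : q + 1]         # skip the trivial eigenvector
    if train:                          # eigenvectors are defined only up to sign
        pv = pv * np.random.choice([-1.0, 1.0], size=(1, pv.shape[1]))
    return pv                          # shape (|V|, q)

pv = spectral_pe(nx.cycle_graph(8), q=3)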
Powers of the adjacency matrix. Our final option for positional encoding generalizes the shortest-path encoding and can capture the inductive biases of diffusion-based GNNs. The entry at position (i, j) of the k-th power A^k of the adjacency matrix A of graph G gives the number of paths of length k between nodes i and j in G. Concatenating the powers for k = 1, . . . , r, we get for each edge e in G′ an integer vector pe ∈ ℕ^r, giving the powers-of-the-adjacency-matrix positional encoding. This embedding can be used to recover the shortest-path embedding. It can also generalize the inductive bias of diffusion-based multi-hop GNNs. In particular, diffusion aggregation weights are often approximated using a Taylor series, W = ∑_{i=0}^{∞} θ_i A^i ≈ ∑_{i=0}^{r} θ_i A^i =: W̄, where the θ_i form a prescribed decaying sequence (θ_i > θ_{i+1}). The entries of W̄ can be computed linearly from the adjacency-powers positional encoding. Hence, it is strictly more general than using prescribed diffusion-based aggregation weights on G.
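A sketch of the adjacency-powers encoding, and of recovering truncated diffusion weights from it; the names and the choice of θ are ours.

import numpy as np
import networkx as nx

# Sketch: stack A^1, ..., A^r; entry P[i, j] is the integer vector p_e in N^r.
def adjacency_powers_pe(G: nx.Graph, r: int) -> np.ndarray:
    A = nx.to_numpy_array(G)
    return np.stack([np.linalg.matrix_power(A, k) for k in range(1, r + 1)], axis=-1)

P = adjacency_powers_pe(nx.path_graph(5), r=3)
# Truncated diffusion weights are linear in the encoding entries; the i = 0
# identity term of W-bar needs no edge encoding. theta is an illustrative
# decaying sequence.
theta = np.array([0.5 ** k for k in range(1, 4)])
W_bar = (P * theta).sum(axis=-1)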
Lossless encodings. The previously discussed lossless-encoding properties of our graph rewiring method are accomplished by two of the above-mentioned positional encodings:
Proposition 1. Shortest-path and adjacency matrix positional encodings yield lossless rewirings.
Proof. Recovering the original graph G = (V,E) from the rewired graph G′ = (V,E′) is straightforward. With the shortest-path positional encoding, the original edge set can be recovered via E = {e ∣ e ∈ E′, pe = 1}, and with the powers-of-the-adjacency-matrix encoding via E = {e ∣ e ∈ E′, (pe)₁ = 1}.
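Continuing the rewiring sketch from Section 4.2, where edges of the rewired graph Gp carry the shortest-path feature dist, the recovery is a one-liner:

# Recover the original edge set: exactly the rewired edges at distance 1.
E_original = {(u, v) for u, v, d in Gp.edges(data="dist") if d == 1}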
5 IMPLEMENTATION DETAILS
Our method is compatible with most GNN architectures. Here we adopt GatedGCN (Bresson and Laurent, 2018), MoNet (Monti et al., 2017), and an implementation of the Transformer (Vaswani et al., 2017); see Appendix B for details. For each model, we consider graph rewiring with a different r-hop receptive field around each node, and compare with and without the CLS-node, as well as the three positional encodings introduced in Section 4.2.2.
Input and readout layers. Typically, GNNs on a graph G = (V,E, fv, fe) first embed node features fv and edge features fe through a small feed-forward network (FFN) input layer. When incorporating positional encodings per edge/node, we embed them using a small FFN and add them at this input layer. After this layer, the network updates node and edge representations through successive applications of GNN layers. Lastly, a readout layer is applied to the last GNN layer L. For node classification, this is typically an FFN applied to each node feature h^L_i. For graph classification, it is typically an FFN applied to the mean or sum aggregation of all node features h^L. For graph classification with the CLS-node, we aggregate by applying the FFN to the CLS-node's features in the last layer.
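A sketch of these input and readout layers in PyTorch; the module name, dimensions, and two-layer FFNs are illustrative choices, not the released implementation.

import torch
import torch.nn as nn

class InputReadout(nn.Module):
    def __init__(self, dv, de, dpe, d, n_out, use_cls=True):
        super().__init__()
        ffn = lambda m: nn.Sequential(nn.Linear(m, d), nn.ReLU(), nn.Linear(d, d))
        self.node_in, self.edge_in, self.pe_in = ffn(dv), ffn(de), ffn(dpe)
        self.readout = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, n_out))
        self.use_cls = use_cls

    def embed(self, fv, fe, pe):
        h = self.node_in(fv)                   # (|V'|, d) node embeddings
        e = self.edge_in(fe) + self.pe_in(pe)  # positional encodings added at input
        return h, e

    def graph_pred(self, hL, cls_index=None):
        # Readout: CLS-node feature if present, else mean aggregation.
        z = hL[cls_index] if self.use_cls else hL.mean(dim=0)
        return self.readout(z)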
6 EXPERIMENTS
We evaluate performance on six benchmark graph datasets: ZINC, AQSOL, PATTERN, CLUSTER, MNIST, and CIFAR10 from (Dwivedi et al., 2020). The benchmark includes a training time limit of 12 hours; we use similar compute to their work via a single Tesla V100 GPU. Training also stops if the validation loss does not improve for a certain number of epochs (Dwivedi et al., 2020). Thus, our experiments consider the ease of training and efficient use of compute. For the first two datasets, we run GatedGCN, MoNet, and Transformer to show that rewiring and positional encoding work for different models; for the other datasets we run only GatedGCN to focus on the effects of receptive field size, the CLS node, and positional encodings. For all datasets, we run with increasing receptive fields, with different positional encodings, and with or without the CLS-node. In the tables, density is the average of the densities (defined as the ratio ∣E∣/∣V∣²) of each graph in the dataset rewired to the respective receptive field size. See Appendix A for details.
Table 1 compares our best results with other top-performing methods and models. All our top-performing models come from the GatedGCN, although the Transformer performs comparably; however, the Transformer was harder to train; see Appendix B. MoNet performs worse but still sees significant improvements from our approach. Our GatedGCN implementation was taken from the same work (Dwivedi et al., 2020) that introduced the benchmarks and code that we use. Thus, hyperparameters might be better adapted to the GatedGCN. This highlights the benefits of our model-agnostic approach, which allows us to pick the best models from Dwivedi et al. (2020) and combine them with our methods. Our approach with 100K parameters achieves state-of-the-art on all datasets among models with 100K parameters and even outperforms 500K-parameter models.
ZINC, Graph Regression. ZINC consists of molecular graphs, and the task is graph property regression for constrained solubility. Each ZINC molecule is represented as a graph with atoms as nodes and bonds as edges. In Table 2 we present results for r from 1 to 10. The density column shows that these graphs are sparse and that the number of edges increases almost linearly as the receptive field r is increased. Performance across all settings noticeably improves when increasing r above 1. Top performance is achieved with the CLS-node and powers-of-the-adjacency positional encoding at r = 4, using 52% of the edges and compute compared to complete attention. When using the CLS node and/or spectral positional encodings, top performance generally occurs at lower r, which is likely due to the global nature of these changes to the graphs. The GatedGCN and Transformer perform comparably for the same settings, with a slight edge to the GatedGCN. The two models show the same performance trends between settings, i.e., both increased receptive fields and the CLS-node boost performance. Further, Ying et al. (2021) report a performance of 0.123 on ZINC with their Graphormer(500K), i.e., a Transformer with positional encodings and complete attention. However, their training is capped at 10,000 epochs while ours is capped at 1,000 epochs; training their Graphormer(500K) with the same restrictions leads to a score of 0.26 on ZINC.
AQSOL, Graph Regression. AQSOL consists of the same types of molecular graphs as ZINC. The densities of AQSOL graphs are slightly higher than those of ZINC. For all settings not including the CLS-node or spectral positional encodings, performance improves significantly when increasing r above 1 (see Table 3); in these settings, the best-performing r values are larger than for ZINC. However, when including the CLS node or spectral positional encodings, performance changes much less across different r. This indicates the importance of some form of global bias on this dataset. At least one of larger r values, spectral positional encodings, or the CLS-token is required to provide the global bias, but their effect differs slightly across the two models. GatedGCN performs significantly better, and larger r values still boost performance when combined with the CLS-token for MoNet, but not for GatedGCN. MoNet uses a Bayesian Gaussian Mixture Model (Dempster et al., 1977), and since MoNet was not constructed with edge features in mind, we simply add edge embeddings to the attention coefficients. Not surprisingly, this points to the importance of including edge features for optimal use of expanded receptive fields and positional encodings.
CLUSTER, Node Classification. CLUSTER is a node classification dataset generated using a stochastic block model (SBM). The task is to assign a cluster label to each node. There are 6 cluster labels in total, and the average homophily is 0.34. CLUSTER graphs do not have edge features. Table 4 gives results for r-hop neighborhoods from 1 to 3. As can be seen in the density column, at r = 3 all graphs are fully connected, and more than 99% of them are fully connected at r = 2. Hence, these graphs are dense. Significant improvements are achieved by increasing r for all but the spectral positional encoding (again showcasing its global properties), which together with the CLS node performs competitively at r = 1. The CLS node is helpful overall, especially at r = 1. The GatedGCN and Transformer perform comparably for all but the spectral positional encodings, where the Transformer breaks down. We found that this breakdown was due to the placement of batch normalization, discussed in Appendix B.1.
PATTERN, Node Classification. The PATTERN dataset is also generated using an SBM model and has an average homophily of 0.66. The task is to classify the nodes into two communities, and the graphs have no edge features. Table 5 shows results for r-hops from 1 to 3. Similarly to CLUSTER, the density column shows that the graphs are dense. Significant improvements are achieved by increasing r > 1 and/or using the CLS-node. Performance generally decreases at r = 3. Similarly to CLUSTER, the CLS-node helps at r = 1, but for both CLUSTER and PATTERN, the top-performing model comes from a larger r > 1 without the CLS-node, suggesting that trade-offs exist between the CLS-node and increased receptive fields. Compared to CLUSTER, our approach shows less of a performance boost for PATTERN, which leads us to hypothesize that our approach is more helpful for graphs with low homophily; we investigate this further in Appendix F.
MNIST, Graph Classification. MNIST is an image classification dataset converted into super-pixel graphs, where each node's features include super-pixel coordinates and intensity. The images are of handwritten digits, and the task is to classify the digit. Table 6 summarizes results for r from 1 to 3. Not all graphs are fully connected at r = 3, but training at r = 4 exceeds our memory limit. Noticeable performance gains are achieved at r = 2, but performance generally decreases at r = 3. The CLS-node consistently improves performance at r = 1 but not otherwise, indicating that the CLS-node and increased r have partly subsuming effects.
CIFAR10, Graph Classification. CIFAR10 is an image classification dataset converted into super-pixel graphs, where each node's features are the super-pixel coordinates and intensity. The images consist of ten natural motifs, and the task is to classify the motif, e.g., dog, ship, or airplane. Table 7 provides results for r from 1 to 3. Not all graphs are fully connected at r = 3, but training at r = 4 led to out-of-memory issues. The top-performing versions are all at r = 1, and performance degrades for r > 1. As with MNIST, the CLS-node only improves performance at r = 1, again indicating its partly subsuming effects with increased r.
6.1 NEIGHBORSMATCH, OVER-SQUASHING
Alon and Yahav (2021) introduce a toy problem called NeighborsMatch to benchmark the extent of over-squashing in GNNs, while controlling over-squashing by limiting the problem radius rp. The graphs in the dataset are binary trees of depth equal to the problem radius rp. Thus, the graphs are structured and sparse, and the number of edges grows linearly with the increased receptive field r. See Figure 1, Appendix C, for results with GatedGCN. Increasing the receptive field r in steps of 1 increases the attainable problem radius in steps of 1, while using the CLS-node at r = 1 falls in between the performance of r = 2 and r = 3 but with a much longer tail. This further showcases the partly subsuming, partly distinct (complementary and conflicting) effects of the receptive field and the CLS-node, as also observed on the other benchmarks.
6.2 COMPUTATIONAL ANALYSIS
For all positional encodings, the number of edges determines the asymptotic runtime and memory use. The CLS-node only introduces an additive factor. Figures 4 and 5 in Appendix E show that, in practice, runtime scales roughly like the density as the receptive field size is increased, though real runtime has a significant constant factor.
6.3 SELECTING POSITIONAL ENCODING AND HOPS SIZE
We recommend the adjacency positional encodings together with the CLS-node. In terms of ranked performance across the 6 datasets, the adjacency and spectral positional encodings perform the same, but the spectral encoding performs considerably worse on the ZINC dataset, while the differences are smaller on the other datasets. Additional experiments in Appendix D, Figure 2, assess the discriminative power of the different encodings. However, no positional encoding is superior in all aspects. Instead, each one has unique benefits as well as drawbacks. This is made apparent by considering r as a parameter and observing the performance differences across values of r. Furthermore, the CLS-node is part of the best-performing configuration more often than not. Similarly, no fixed r is optimal for all datasets. Instead, the optimal r depends on the dataset and the amount of compute. Appendix F shows that increased r diminishes the reliance on homophily as an inductive bias, and thus low homophily of a dataset could be used as an indicator for selecting an increased r. If the density does not change much with a change in r, then neither does performance. The spectral positional encodings, the CLS-node, and increased r have subsuming effects on multiple datasets; here the CLS-node or spectral positional encodings may be preferred as computationally cheaper alternatives to increasing r.
From this empirical study, for picking an optimal r, we recommend computing the densities for increasing r and picking the first r at which the average density exceeds 0.5; this reaps most of the performance boosts. It seems to maintain a helpful locality bias while significantly reducing compute compared to complete attention. See Appendix G for further discussion.
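A sketch of this heuristic, counting ordered node pairs so that a fully-connected graph has density close to 1; the function name and defaults are ours.

import networkx as nx

def pick_r(graphs, r_max=10, threshold=0.5):
    for r in range(1, r_max + 1):
        densities = []
        for G in graphs:
            lengths = dict(nx.all_pairs_shortest_path_length(G, cutoff=r))
            n_edges = sum(1 for u, dists in lengths.items() for v in dists if v != u)
            densities.append(n_edges / len(G) ** 2)
        if sum(densities) / len(densities) > threshold:
            return r  # first r whose average rewired density exceeds the threshold
    return r_max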
7 DISCUSSION
Our simple graph rewiring and positional encodings achieve state-of-the-art performance, widening receptive fields while alleviating over-squashing. This is largely due to the ease of applying our method to models that stem from a large body of work on GNNs, highlighting the benefits of our model-agnostic approach.
The reality is that attention with complete receptive fields is still computationally intractable for most practitioners and researchers. Here, however, we show that significant performance boosts via attention and increased receptive fields can be obtained by increasing the receptive field only slightly. This opens up recent work to a broader range of practitioners and gives fairer conditions for comparing GNNs. In addition, our systematic investigation of increased receptive fields and positional encodings gives further insight into the necessity of homophily for the success of GNNs and highlights other implicit biases in GNN architectures.
A TRAINING DETAILS
Both code and training follow Dwivedi et al. (2020) closely and, to a lesser extent, Dwivedi and Bresson (2021), which uses the same code base.
Like Dwivedi et al. (2020), we use the Adam optimizer (Kingma and Ba, 2015) with the same learning rate decay strategy. The initial learning rate is set to 10−3 and is reduced by half if the validation loss does not improve after a fixed number of epochs ("lr_schedule_patience", either 5 or 10). Instead of setting a maximum number of epochs, training is stopped either when the learning rate reaches 10−6 or when the computational time reaches 12 hours (6 hours for NeighborsMatch). Experiments are run with 4 different seeds; we report summary statistics from the 4 results.
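A sketch of this schedule in PyTorch; model, train_epoch, and validate are stand-ins for the benchmark pipeline, not the released code.

import time
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                   # stand-in for the GNN
def train_epoch(model, optimizer): ...    # placeholder: one pass over training data
def validate(model): return 0.0           # placeholder: mean validation loss

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=10)  # "lr_schedule_patience"

start = time.time()
# Stop when the learning rate reaches 1e-6 or after 12 hours.
while optimizer.param_groups[0]["lr"] > 1e-6 and time.time() - start < 12 * 3600:
    train_epoch(model, optimizer)
    scheduler.step(validate(model))  # halve lr on plateaued validation loss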
Below we include training settings for the different datasets.
A.1 ZINC
"model": GatedGCN and Transformer, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.2 AQSOL
"model": GatedGCN and MoNet, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.3 CLUSTER
"model": GatedGCN and Transformer, "batch_size": 48 (GatedGCN), 32 or 16 (Transformer), "lr_schedule_patience": 5, "max_time": 12
A.4 PATTERN
"model": GatedGCN, "batch_size": 48, "lr_schedule_patience": 5, "max_time": 12
A.5 MNIST
"model": GatedGCN, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.6 CIFAR10
"model": GatedGCN, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.7 NEIGHBORSMATCH
"model": GatedGCN,
"batch_size": 256, "lr_schedule_patience": 10, "max_time": 6
B TRANSFORMER IMPLEMENTATION
We implemented a simple version of the Transformer adapted to graphs:
$$\hat{h}^{l}_i = \mathrm{BN}\big(h^{l-1}_i\big), \qquad \hat{\hat{h}}^{l}_i = \Big\Vert_{k=1}^{H}\Big(\sum_{j \in \mathcal{N}_i \cup \{i\}} a^{l,k}_{i,j}\, W^{l}_{k}\, \hat{h}^{l}_j\Big) + h^{l-1}_i, \qquad h^{l}_i = \mathrm{FFN}\big(\mathrm{BN}(\hat{\hat{h}}^{l}_i)\big) + \hat{\hat{h}}^{l}_i$$

with

$$\hat{e}^{l}_{i,j} = \mathrm{BN}\big(e^{l-1}_{i,j}\big), \qquad \hat{a}^{l,k}_{i,j} = \Big(\big(A^{l}_{k}\hat{h}^{l}_i\big)^{\top}\big(B^{l}_{k}\hat{h}^{l}_j\big) + C^{l}_{k}\hat{e}^{l}_{i,j}\Big)/d, \qquad a^{l,k}_{i,j} = \frac{\exp\big(\hat{a}^{l,k}_{i,j}\big)}{\sum_{j' \in \mathcal{N}_i \cup \{i\}} \exp\big(\hat{a}^{l,k}_{i,j'}\big)}, \qquad e^{l}_{i,j} = \mathrm{FFN}\big(\hat{e}^{l}_{i,j}\big) + e^{l-1}_{i,j}$$
Here, h and e are node and edge features (resp.) from the previous layer, N_i denotes the neighbors of node i in the rewired graph, W_k, A_k, B_k ∈ R^{(d/H)×d} and C_k ∈ R^{1×d} are learnable weight matrices, H is the number of attention heads, and BN is short for batch normalization. ∥_{k=1}^{H} denotes the concatenation of the attention heads.
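The following is a dense, single-graph sketch of the layer above in PyTorch, written for readability; the actual implementation operates on sparse batched graphs, and the class and variable names are ours.

import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    def __init__(self, d, H):
        super().__init__()
        assert d % H == 0
        self.H, self.dk = H, d // H
        self.Wv = nn.Linear(d, d, bias=False)  # W_k for all heads, stacked
        self.Wa = nn.Linear(d, d, bias=False)  # A_k
        self.Wb = nn.Linear(d, d, bias=False)  # B_k
        self.Wc = nn.Linear(d, H, bias=False)  # C_k (one scalar per head)
        self.bn_h, self.bn_e = nn.BatchNorm1d(d), nn.BatchNorm1d(d)
        self.bn_h2 = nn.BatchNorm1d(d)
        self.ffn_h = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.ffn_e = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, h, e, adj):
        # h: (n, d) nodes; e: (n, n, d) edges; adj: (n, n) 0/1 with self-loops.
        n, d = h.shape
        hh = self.bn_h(h)                                  # h-hat
        eh = self.bn_e(e.reshape(-1, d)).reshape(n, n, d)  # e-hat
        q = self.Wa(hh).view(n, self.H, self.dk)
        k = self.Wb(hh).view(n, self.H, self.dk)
        scores = (torch.einsum("ihd,jhd->ijh", q, k) + self.Wc(eh)) / d
        scores = scores.masked_fill(adj.unsqueeze(-1) == 0, float("-inf"))
        a = torch.softmax(scores, dim=1)                   # over neighbors j
        v = self.Wv(hh).view(n, self.H, self.dk)
        hh2 = torch.einsum("ijh,jhd->ihd", a, v).reshape(n, d) + h  # concat heads, residual
        h_out = self.ffn_h(self.bn_h2(hh2)) + hh2
        e_out = self.ffn_e(eh.reshape(-1, d)).reshape(n, n, d) + e
        return h_out, e_out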
B.1 DESIGN CHOICES AND CHALLENGES
There are many variations on the Transformer model. Following Ying et al. (2021), we put the normalization before the multi-head attention, which caused instability when training on CLUSTER with Laplacian (spectral) positional encodings. This was fixed by putting the normalization after the attention or by using layer normalization instead of batch normalization; however, these changes reduced performance on ZINC. While the GatedGCN worked well with identical architecture parameters across datasets, we found that the Transformer needed more variation to stay competitive on MNIST and CIFAR10; in particular, fewer layers and larger hidden dimensions.
Transformers use multi-head attention, which stores a vector with one dimension per attention head on each edge (seen as directed). Hence, the memory load becomes 2 × ∣E∣ × num_heads (in our experiments, num_heads = 6), whereas for GatedGCN it is only 2 × ∣E∣. This causes a memory bottleneck for the Transformer that may force one to use a reduced batch size to avoid memory issues.
B.2 OTHER VARIANTS
We implemented other variants, including more involved Transformers. As in Vaswani et al. (2017), we ran the path integers through sine and cosine functions of different frequencies, and, inspired by Dai et al. (2019) and Ke et al. (2020), we implemented a more involved incorporation of relative positions in the multi-head attention (see below); however, we found performance to be comparable.
In natural language processing, the input is a sequence (a line graph) x = (x₁, . . . , xₙ) of text tokens from a vocabulary set V, with each token having a one-hot encoding f_V ∶ V → {0,1}^{∣V∣}. The word embeddings E ∈ R^{n×d} for n tokens are formed as E = (W_embed f_V(x_i) ∣ x_i ∈ x), where W_embed ∈ R^{d×∣V∣} is a learnable weight matrix. The original Transformer model used absolute positional encodings. This means that we add the positional encoding to the node embedding at the input layer. Consider a positional encoding function pe ∶ N₀ → R^d. Then the first input is
$$h^{0} = \big(W_{\mathrm{embed}} f_V(x_i) + pe(i) \,\big|\, i = 1, \dots, n\big) = E + U$$
where U = (pe(i) ∣ i = 1, . . . , n) ∈ R^{n×d}. Typically pe contains sine and cosine functions of different frequencies:
$$pe(k, 2l) = \sin\!\big(k/10000^{2l/d}\big), \qquad pe(k, 2l+1) = \cos\!\big(k/10000^{2l/d}\big)$$
where k ∈ N is the position and l ∈ N indexes the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 × 2π. This function was chosen because it was hypothesized that it would allow the model to easily learn to attend by relative positions, since for any fixed offset m, pe(k + m) is a linear function of pe(k). It was also hypothesized that it may allow the model to extrapolate to sequence lengths longer than those encountered during training.
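A sketch of this function, vectorized over the dimensions for one position k (the slicing handles odd d):

import numpy as np

def sinusoidal_pe(k: int, d: int) -> np.ndarray:
    pe = np.zeros(d)
    i = np.arange(0, d, 2)  # even dimensions 2l
    pe[0::2] = np.sin(k / 10000 ** (i / d))
    pe[1::2] = np.cos(k / 10000 ** (i[: d // 2] / d))
    return pe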
In many cases, absolute positional encodings have been replaced with relative fully learnable positional encodings and relative partially learnable positional encodings (Dai et al., 2019). To justify these, consider the first attention layer with absolute positional encodings:
$$A^{\mathrm{abs}}_{i,j} = E_{x_i} W_q W_k^{\top} E_{x_j}^{\top} + E_{x_i} W_q W_k^{\top} U_j^{\top} + U_i W_q W_k^{\top} E_{x_j}^{\top} + U_i W_q W_k^{\top} U_j^{\top}$$
For relative (fully and partially) learnable positional encodings we have instead:
$$A^{\mathrm{rel}}_{i,j} = \underbrace{E_{x_i} W_q W_{k,E}^{\top} E_{x_j}^{\top}}_{(1)} + \underbrace{E_{x_i} W_q W_{k,R}^{\top} R_{i-j}^{\top}}_{(2)} + \underbrace{u\, W_{k,E}^{\top} E_{x_j}^{\top}}_{(3)} + \underbrace{v\, W_{k,R}^{\top} R_{i-j}^{\top}}_{(4)}$$
where u, v ∈ R^{1×d} are learnable weights and R_{i−j} ∈ R^{1×d} is a relative positional encoding between i and j. Each term has the following intuitive meaning: term (1) represents content-based addressing, term (2) captures a content-dependent positional bias, term (3) governs a global content bias, and term (4) encodes a global positional bias.
For relative fully learnable positional encodings, W_{k,R}^{⊤} R_{i−j}^{⊤} is a learnable weight in R^{d×1} for each offset i − j, while for relative partially learnable positional encodings, R_{i−j} = pe(∣i − j∣), where pe is the sinusoidal function from before.
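A sketch of the partially learnable variant, reusing sinusoidal_pe from above; d, n, and the zero-initialized u, v are illustrative choices.

import torch
import torch.nn as nn

d, n = 64, 10
E = torch.randn(n, d)  # content embeddings
R = torch.stack([torch.as_tensor(sinusoidal_pe(abs(i - j), d), dtype=torch.float32)
                 for i in range(n) for j in range(n)]).view(n, n, d)
Wq, WkE, WkR = (nn.Linear(d, d, bias=False) for _ in range(3))
u = nn.Parameter(torch.zeros(1, d))
v = nn.Parameter(torch.zeros(1, d))

q = Wq(E)
terms_13 = (q + u) @ WkE(E).T  # terms (1) + (3): content addressing + global content bias
terms_24 = torch.einsum("id,ijd->ij", q + v, WkR(R))  # terms (2) + (4): positional biases
A_rel = terms_13 + terms_24    # (n, n) attention logits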
We implemented both fully and partially learnable positional encodings for the shortest-path positional encodings (integer-valued) and related versions for the other positional encodings (in R^d). We include results in Tables 8 and 9.
C OVER-SQUASHING
Results for the over-squashing experiment can be found in Figure 1.
D ADDITIONAL EVALUATION OF POSITIONAL ENCODINGS
Here we provide initial toy data and a task for comparing positional encodings. In this task we wish to assess how powerful the positional encodings are in practice, i.e., how well they discriminate between different graph isomorphism classes. Specifically, we generate 100 random Erdős–Rényi graphs and then expand the receptive field so that each graph is fully connected. Thus, the positional encodings become the sole instrument for communicating the connectivity/topology of the original graph. The task is to retrieve a specific graph among all 100 graphs, i.e., the task is graph classification with 100 classes. Hence, achieving 100% accuracy means that the GNN, based on the positional encodings, has been able to discriminate between all graphs. We only look at train accuracy here, since we are interested in the power to overfit, not to generalize. Results can be found in Figure 2.
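A sketch of the data generation for this task; the graph size n = 20 and edge probability p = 0.2 are our illustrative choices.

import networkx as nx

# 100 random Erdős–Rényi graphs, each forming its own class.
graphs = [nx.erdos_renyi_graph(n=20, p=0.2, seed=s) for s in range(100)]
labels = list(range(100))  # retrieve-the-graph: one class per graph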
All positional encodings are able to solve the task after a sufficient amount of training, except Adj-10. Adj-5 and Adj-10 encode the adjacency matrix to the powers of 5 and 10, respectively (at both powers, all graphs are fully connected). Adj-10 encodes, between any two nodes, the number of paths of length 10, the number of paths of length 9, and so on. The experiments indicate that too much of this information confuses the GNN and makes it harder to discriminate between graphs. The shortest-path and Adj-5 positional encodings are the fastest at solving the task; the Laplacian encoding is slower, which can be due to the fact that it is only unique up to a sign and that we randomly switch the sign during training.
E COMPUTATIONAL RUNTIME AND MEMORY USE
In our implementation, computing the positional encodings and expanding the r-hops of the graph are done in the same process for the shortest-path and adjacency positional encodings; thus this step always occurs, and we found that implementing it via iterated matrix multiplications of the adjacency matrix gave the fastest results. How this scales with r can be found in Figure 3. Since each increment of r results in an additional matrix multiplication, the linear increase is expected. The spectral positional encoding has the same additive runtime per graph across r of 1.3 × 10−3 seconds. These matrix multiplications are done on CPU rather than GPU; running them on GPU could result in speed-ups. However, the runtime for computing these positional encodings is at least an order of magnitude smaller (per graph) than the runtime for running the subsequent GNN on a GPU, so there was no need to optimize this runtime further.
In Figures 4 and 5 we include the actual runtime of the GNN (on GPU) for different positional encodings and hop sizes, juxtaposed with the density of the modified graphs, for the ZINC and CIFAR10 datasets. Note that we here exclude the computation of the positional encoding on the input graph, which can be found in Figure 3.
Most graphs to which GNNs are applied are connected, and typically the number of edges is greater than the number of nodes, i.e., ∣E∣ ≥ ∣V∣. Since all established GNNs make use of the edges in one way or another, the number of edges usually determines the asymptotic behavior of the runtime and memory use, i.e., they are in O(∣E∣). With modern deep learning and specialized graph learning frameworks, GPU parallelization and other more technical aspects affect memory and runtime. Thus, Figures 4 and 5 compare theoretical runtime (dominated by the density) with actual runtime of code run on GPUs. We find that density and actual runtime are strongly correlated. In Figure 6 we include the memory use for increasing radius on the ZINC dataset and find that it is roughly linear in the density as well.
F HOMOPHILY SCORE AND PERFORMANCE
We include experiments to investigate the correlation between homophily score (Ma et al., 2021) and performance when increasing hop size. This applies to the node classification datasets we used, CLUSTER and PATTERN. We split the test set into three buckets, which is simply a sorted segmentation of the graphs with increasing homophily scores. We evaluate trained GatedGCN models with adjacency positional encodings for r values 1 and 2 (at r = 2 almost all graphs are fully connected). See Tables 10 and 11 for results.
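A sketch of the homophily score and the three-bucket split; the names are ours, and labels maps each node to its class.

import networkx as nx

def homophily(G: nx.Graph, labels: dict) -> float:
    # Fraction of edges whose endpoints share a label.
    edges = list(G.edges())
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / max(len(edges), 1)

def three_buckets(graphs, labelings):
    # Sort graphs by homophily and segment into three equal buckets.
    scored = sorted(zip(graphs, labelings), key=lambda gl: homophily(*gl))
    k = len(scored) // 3
    return scored[:k], scored[k:2 * k], scored[2 * k:]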
We find that a high homophily score correlates much more strongly with performance at r = 1 than at r = 2. This indicates that increased r diminishes the reliance on homophily as an inductive bias.
G OPTIMAL POSITIONAL ENCODING AND HOPS SIZE
Again, we recommend the adjacency positional encodings together with the CLS-node. We find that in terms of ranked performance on the 6 datasets, the adjacency and spectral positional encodings perform at the same level, but the spectral encoding performs considerably worse on the ZINC dataset, while the differences are smaller on the other datasets. The spectral encoding hardcodes global-to-local information on the nodes, and the size of the encoding vector is a hyperparameter; we found performance not to be too sensitive to this hyperparameter, but future work could investigate it further. Spectral embeddings also use less memory, as they do not store their embeddings as edge features; however, since information is still propagated along edges, we find this memory saving to be significant but not asymptotically different. The adjacency encoding breaks down faster than the other positional encodings as r is increased; we believe this to be due to the corresponding increase in the size of the embedding vectors and its introduction of low-signal information that is easy to overfit to, e.g., the number of paths of length 10 between two nodes (where any edge can be used multiple times). The Erdős–Rényi experiments in Appendix D support this observation. All in all, though, the adjacency encoding stands out slightly considering performance, runtime, memory use, and the toy experiments. Furthermore, the CLS-node is part of the best-performing configuration more often than not, and it has the additional advantage of leading to peak performance at lower r, where in some cases it also reduces runtime and memory use compared to increasing r instead.
In this work we do not find a fixed r that is optimal for all datasets. The optimal r depends on the dataset and the amount of compute available. Given the fixed amount of compute used in our experiments, we found that the best performance was always attained at r of four or smaller. We provide a heuristic for selecting a good r, but ultimately it depends on the amount of compute and memory available.
Summary Of The Paper
This paper proposes a method to enlarge the receptive field of GNNs by augmenting graphs with r-hop neighborhoods, positional encodings, and a classification (CLS) node. Extensive experiments are conducted on six benchmark graph datasets including ZINC, AQSOL, PATTERN, CLUSTER, MNIST, and CIFAR10 to demonstrate the effectiveness of the proposed method.
Strengths And Weaknesses
Pros:
The detailed experimental results in Section 6 are appreciated.
A thorough analysis of runtime and memory consumption is presented in the appendix which is helpful.
Cons:
The technical novelty of the paper is limited compared to previous works on multi-hop GNNs and positional encodings for GNNs. Two pages are used to discuss positional encodings in Section 4.2. However, it is not clear what the novel contribution of this work on positional encodings is. Since this work does not propose a new positional encoding, I recommend the authors compress Section 4.2 and move some content to related work.
It is not clear how the results of Table 1 are obtained in Section 6. What value of the hyperparameter r and what GNN models are used? Is r the same across datasets, or is it tuned for each dataset? The results seem to be aggregated from the best models in Tables 2-6. If this is the case, the hyperparameter r and the GNN layers are chosen based on test set results with heavy hyperparameter tuning, which is not encouraged. Different r and with/without CLS nodes are used for different datasets, which also makes the results inconclusive.
This work is a combination of multi-hop GNN methods and positional encoding methods on graphs. I believe a comparison with multi-hop GNNs, such as MixHop [1], is necessary.
Minor:
"Section 4: we also add a fully-connected CLS node." "CLS" node should be clearly defined here.
[1] Abu-El-Haija, S., Perozzi, B., Kapoor, A., Alipourfard, N., Lerman, K., Harutyunyan, H., Ver Steeg, G. and Galstyan, A., 2019, May. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In international conference on machine learning (pp. 21-29). PMLR.
Clarity, Quality, Novelty And Reproducibility
The work is presented clearly and is easy to follow. The experimental results are thorough and should not be difficult to reproduce. However, the technical novelty of the paper is limited compared to previous works on multi-hop GNNs and positional encodings for GNNs.
Given only the connectivity of a rewired graph G′r = (V ′,E′, f ′v, f ′e) from the two-step procedure above, it may not be possible to recover the connectivity of the original graph G = (V,E, fv, fe). In the extreme, when r is large and G is connected, G′r could become fully-connected, meaning that all topology is lost—removing the central cue for graph-based learning. To combat this, we encode the original topology of G into G′r via positional encodings, which are node and/or edge features. We consider several positional encoding functions for edges pe ∶ G × V ′ × V ′ → Rn or nodes pv ∶ G × V ′ → Rn, appending the output of pe as edge or pv as node features to G′r. Section 4.2.1 lays out properties to compare choices of pe and/or pv . Then, Section 4.2.2 provides concrete positional encodings compared in our experiments that trade off between the properties we lay out.
4.2.1 PROPERTIES OF POSITIONAL ENCODINGS
There are countless ways to encode the subgraph topology of G within G′ in vertex features pv or edge features pe. Below, we state a few properties we can check to give a framework for comparing the capabilities and biases of possible choices.
Lossless encoding. While a GNN can ignore information in input G′, it cannot reconstruct information that has been lost in constructing G′ from G. Yet, there can be benefits in forgetting information, e.g. when dealing with noisy graphs or incorporating a stronger inductive bias (Rossi et al., 2020; Klicpera et al., 2019). That said, a simple property to check for G′ equipped with positional encoding features pe, pv is whether we can recover G from this information, that is, whether our encoding is lossless (or non-invasive). As long as it is possible to identify G within g(G), g is an injection and non-invasive. Hence, a sufficient condition for lossless positional encodings is as follows: If all edges in G′ have unique positional encodings, then g ∶ G → G is a bijection. One way to achieve this condition is to use an additional edge feature that is unique to the 1-ring.
Discriminative power. Following work investigating the discriminative power of GNNs (Xu et al., 2019; Brüel Gabrielsson, 2020), Ying et al. (2021) showed that expanded receptive fields together with shortest-path positional encodings are strictly more powerful than the 1-Weisfeiler-Lehman (WL) test and hence more powerful than 1-hop vanilla spatial GNN models (Xu et al., 2019). The combination of increased receptive fields, positional encodings, and choice of subsequent GNN models determines discriminative power. In fact, it follows from (Ying et al., 2021) that the positional encodings presented below together with an increased receptive field r > 1 and a vanilla spatial GNN model are strictly more powerful than the 1-WL test.
Computational time. Positional encodings may come at substantial computational cost when working with r-hop neighborhoods. The cost of computing positional encodings affects total inference time, which may be relevant in some learning settings. However, in our setting the runtime of computing positional encodings is an order of magnitude less than the subsequent inference time, and in our implementation the asymptotic runtimes of computing the positional encodings are the same. See Appendix E.
Local vs. global. The positional encoding of a vertex or edge can be local, meaning it incorporates information from a limited-sized neighborhood in G, or global, in which case adding or removing a node anywhere in G could affect all the positional encodings.
Inductive bias. Our positional encodings can bias the results of the learning procedure, effectively communicating to the downstream GNN which properties of G and G′ are particularly important for learning. Without positional encodings, our model would induce a bias stating that distances < r in our graph are insignificant. More subtly, suppose ℓ is the distance (of length ≤ r) between two
nodes in G corresponding to a new edge in E′. Using ℓ directly as positional encoding rather than a decaying function, e.g. e−αℓ, makes it easier or harder (resp.) to distinguish long distances in G.
A related consideration involves whether our model can imitate the inductive bias of past work. For example, graph diffusion has been used to incorporate multi-hop connections into GNNs using fixed weights (Wang et al., 2021a). We can ask whether our positional encodings on G′ are sufficient to learn to imitate the behavior of a prescribed multi-hop model on G, e.g. whether a layer of our GNN applied to G′ can capture multi-hop diffusion along G.
Over-squashing and under-reaching. Section 6.1 demonstrates, via the NeighborsMatch problem (Alon and Yahav, 2021), that increased receptive fields as well as the CLS-node alleviate oversquashing; however, this toy problem is concerned with matching node attributes and not with graph topology. We want positional encodings that alleviate over-squashing in the sense that it enables effective information propagation for the task at hand. Our experiments showing that expanded receptive fields alleviate the over-squashing problem and that the best performing positional encoding varies across datasets showcase this. Additionally, our experiments on the discriminative power of positional encodings in Appendix D further help discern the different options.
4.2.2 POSITIONAL ENCODING OPTIONS
Taking the properties above into consideration, we now give a few options for positional encodings below, compared empirically in Section 6.
Shortest path. For any edge e ∈ G′r, the shortest-path positional encoding takes pe ∈ {0,1, . . . , r} to be the integer length of the shortest path in G between the corresponding nodes of E. These embeddings are lossless because G is the subgraph of g(G) with pe = 1. They also are free to compute given our construction of G′r from G. But, multiple vertices in the r-neighborhood of a vertex in V could have the same positional encoding in V ′, and shortest path lengths are insufficient to capture complex inductive biases of multi-hop GNNs like diffusion over large neighborhoods. Shortest-path positional encoding was previously used by Ying et al. (2021), for extending G to a fully-connected graph, but they did not consider smaller r values.
Spectral embedding. Laplacian eigenvectors embed graph vertices into Euclidean space, providing per-vertex features that capture multi-scale graph structure. They are defined by factorizing the graph Laplacian matrix, ∆ = I−D−1/2AD−1/2, where D is the degree matrix and A is the adjacency matrix. We call the result a spectral positional embedding. We can use the q smallest non-trivial Laplacian eigenvectors of G as a node-based positional encoding pv ∶ V ′ → Rq . Following Dwivedi et al. (2020), since these eigenvectors are known only up to a sign, we randomly flip the sign during training. Prior work consider Laplacian eigenvectors as additional node features without topological rewiring (Dwivedi et al., 2020).
Spectral positional encodings do not necessarily make g injective. Even when q = ∣V ∣, this encoding fails to distinguish isospectral graphs (Von Collatz and Sinogowitz, 1957), but these are rarely encountered in practice. On the other hand, spectral signatures are common for graph matching and other tasks. Moreover, unlike the remaining features in this section, spectral positional encodings capture global information about G rather than only r-neighborhoods. Finally, we note that the diffusion equation for graphs can be written as ut = −∆u; this graph PDE can be solved in closed-form given the eigenvectors and eigenvalues of ∆. Hence, given the spectral embedding of G in G′, we can simulate diffusion-based multi-hop GNN architectures up to spectral truncation.
Powers of the adjacency matrix. Our final option for positional encoding generalizes the shortest path encoding and can capture the inductive biases of diffusion-based GNNs. The entry at position (i, j) of the k-th power Ak of the adjacency matrix A of graph G gives the number of paths of length k between node i and j in G. Concatenating the powers from k = 1, . . . , r, we get for each edge e in G′ an integer vector pe ∈ Nr giving the powers of the adjacency matrix positional encoding. This embedding can be used to recover the shortest-path embedding. This adjacency-derived embedding can also generalize the inductive bias of diffusion-based multi-hops GNNs. In particular, diffusion aggregation weights are often approximated using a Taylor series, W = ∑∞i=0 θiAi ≈ ∑ri=0 θiAi ∶= W , where θi are a prescribed decaying sequence (θi > θi+1). The entries of W above can be computed linearly from the adjacency-powers positional encoding. Hence, it is strictly more general than using prescribed diffusion-based aggregation weights on G.
Lossless encodings. The previously discussed lossless-encoding properties of our graph rewiring method are accomplished by two of the above-mentioned positional encodings:
Proposition 1. Shortest-path and adjacency matrix positional encodings yield lossless rewirings.
Proof. Recovering the original graph G = (V,E) from the rewired graph G′ = (V,E′) is straightforward. With the shortest-path positional encoding, the original edge set is recovered via E = {e ∈ E′ ∣ pe = 1}, and with the powers-of-the-adjacency-matrix encoding via E = {e ∈ E′ ∣ (pe)1 = 1}.
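In code, the recovery amounts to a single boolean mask over the rewired edges (a sketch matching the conventions of the snippets above):

```python
import numpy as np

def recover_original_edges(edges: np.ndarray, pe: np.ndarray) -> np.ndarray:
    """Invert the rewiring: for shortest-path encodings, pe has shape
    (num_edges,) and original edges satisfy p_e == 1; for adjacency-powers
    encodings, pe has shape (num_edges, r) and (p_e)_1 == 1 marks them."""
    first = pe if pe.ndim == 1 else pe[:, 0]
    return edges[first == 1]
```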
5 IMPLEMENTATION DETAILS
Our method is compatible with most GNN architectures. Here we adopt GatedGCN (Bresson and Laurent, 2018), MoNet (Monti et al., 2017), and an implementation of the Transformer (Vaswani et al., 2017); see Appendix B for details. For each model, we consider graph rewiring with a different r-hop receptive field around each node, and compare with and without the CLS-node, as well as the three positional encodings introduced in Section 4.2.2.
Input and readout layers. Typically, GNNs on a graph G = (V,E, fv, fe) first embed the node features fv and edge features fe through a small feed-forward network (FFN) input layer. When incorporating positional encodings per edge/node, we embed them using a small FFN and add them at this input layer. After this layer, the network updates node and edge representations through successive applications of GNN layers. Lastly, a readout layer is applied after the last GNN layer L. For node classification, it is typically an FFN applied to each node feature h^L_i. For graph classification, it is typically an FFN applied to the mean or sum aggregation of all node features h^L. For graph classification with the CLS-node, we instead aggregate by applying the FFN to the CLS-node's features in the last layer.
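A minimal PyTorch sketch of such an input layer; the module and argument names are ours, not from the paper's implementation:

```python
import torch.nn as nn

class GraphInputLayer(nn.Module):
    """Embed raw node/edge features with small FFNs and add embedded
    positional encodings at the input layer."""

    def __init__(self, node_dim: int, edge_dim: int, pe_dim: int, hidden: int):
        super().__init__()
        def ffn(d_in):
            return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.node_ffn, self.edge_ffn, self.pe_ffn = ffn(node_dim), ffn(edge_dim), ffn(pe_dim)

    def forward(self, node_feat, edge_feat, edge_pe):
        h = self.node_ffn(node_feat)                          # (num_nodes, hidden)
        e = self.edge_ffn(edge_feat) + self.pe_ffn(edge_pe)   # PE added at input
        return h, e
```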
6 EXPERIMENTS
We evaluate performance on six benchmark graph datasets: ZINC, AQSOL, PATTERN, CLUSTER, MNIST, and CIFAR10 from Dwivedi et al. (2020). The benchmark includes a training time limit of 12 hours; we use similar compute to their work via a single Tesla V100 GPU. Training also stops early if the validation loss does not improve for a set number of epochs (Dwivedi et al., 2020). Thus, our experiments consider the ease of training and efficient use of compute. For the first two datasets, we run GatedGCN, MoNet, and Transformer to show that rewiring and positional encoding work for different models; for the other datasets we run only GatedGCN to focus on the effects of receptive field size, the CLS-node, and positional encodings. For all datasets, we run with increasing receptive fields, with different positional encodings, and with or without the CLS-node. In the tables, density is the average over the dataset of each graph's density (defined as the ratio |E|/|V|^2) after rewiring to the respective receptive field size. See Appendix A for details.
Table 1 compares our best results with other top-performing methods and models. All our top-performing models come from the GatedGCN, although the Transformer performs comparably; however, the Transformer was harder to train (see Appendix B). MoNet performs worse but still sees significant improvements from our approach. Our GatedGCN implementation was taken from the same work (Dwivedi et al., 2020) that introduced the benchmarks and code that we use; thus, the hyperparameters might be better adapted to the GatedGCN. This highlights the benefits of our model-agnostic approach, which allows us to pick the best models from Dwivedi et al. (2020) and combine them with our methods. Our approach with 100K parameters achieves state-of-the-art results on all datasets among models with 100K parameters and even outperforms 500K-parameter models.
ZINC, Graph Regression. ZINC consists of molecular graphs and the task is graph property regression for constrained solubility. Each ZINC molecule is represented as a graph of atoms with nodes and bonds as edges. In Table 2 we present results for r from 1 to 10. The density column shows that these graphs are sparse and that the number of edges increases almost linearly as the receptive field r is increased. Performance across all settings noticeably improves when increasing r above 1. Top performance is achieved with the CLS-node and powers-of-the-adjacency positional encoding at r = 4, and at 52% of the edges and compute compared to complete attention. When using the CLS node and/or spectral positional encodings, top performance generally occurs at lower r, which is likely due to the global nature of these changes to the graphs. The GatedGCN and
Transformer perform comparably for the same settings, with a slight edge to the GatedGCN. The two models show the same performance trends between settings, i.e., both increased receptive fields and the CLS-node boost performance. Further, Ying et al. (2021) report a performance of 0.123 on ZINC with their Graphormer(500K), i.e., a Transformer with positional encodings and complete attention. However, their training is capped at 10,000 epochs while ours is capped at 1,000 epochs; training their Graphormer(500K) with the same restrictions leads to a score of 0.26 on ZINC.
AQSOL, Graph Regression. AQSOL consists of the same types of molecular graphs as ZINC. The densities of AQSOL graphs are slightly higher than those of ZINC. For all settings not including the CLS-node or spectral positional encodings, performance improves significantly when increasing r above 1 (see Table 3); in these settings, the better-performing r are larger than for ZINC. However, when including the CLS-node or spectral positional encodings, performance changes much less across different r. This indicates the importance of some form of global bias on this dataset. At least one of larger r values, spectral positional encodings, or the CLS-token is required to provide the global bias, but their effects differ slightly across the two models: GatedGCN performs significantly better, and larger r values still boost performance when combined with the CLS-token for MoNet, but not for GatedGCN. MoNet uses a Bayesian Gaussian Mixture Model (Dempster et al., 1977), and since MoNet was not constructed with edge features in mind, we simply add edge embeddings to the attention coefficients. Not surprisingly, this points to the importance of including edge features for optimal use of expanded receptive fields and positional encodings.
CLUSTER, Node Classification. CLUSTER is a node classification dataset generated using a stochastic block model (SBM). The task is to assign a cluster label to each node. There are a total of 6 cluster labels, and the average homophily is 0.34. CLUSTER graphs do not have edge features. Table 4 gives results for r-hop neighborhoods from 1 to 3. As can be seen in the density column, at r = 3 all graphs are fully connected, and more than 99% of them are fully connected at r = 2. Hence, these graphs are dense. Significant improvements are achieved by increasing r for all but the spectral positional encoding (again showcasing its global properties), which together with the CLS-node performs competitively at r = 1. The CLS-node is helpful overall, especially at r = 1. The GatedGCN and Transformer perform comparably for all but the spectral positional encodings, where the Transformer breaks down. We found that this breakdown was due to the placement of batch normalization, discussed in Appendix B.1.
PATTERN, Node Classification. The PATTERN dataset is also generated using an SBM, and has an average homophily of 0.66. The task is to classify the nodes into two communities, and the graphs have no edge features. Table 5 shows results for r-hops from 1 to 3. Similarly to CLUSTER, the density column shows that the graphs are dense. Significant improvements are achieved by increasing r > 1 and/or using the CLS-node. Performance generally decreases at r = 3. Similarly to CLUSTER, the CLS-node helps at r = 1, but for both CLUSTER and PATTERN the top-performing model comes from a larger r > 1 without the CLS-node, suggesting that trade-offs exist between the CLS-node and increased receptive fields. Compared to CLUSTER, our approach shows less of a performance boost on PATTERN, which leads us to hypothesize that our approach is more helpful for graphs with low homophily; we investigate this further in Appendix F.
MNIST, Graph Classification. MNIST is an image classification dataset converted into super-pixel graphs, where each node’s feature includes super-pixel coordinates and intensity. The images are of handwritten digits, and the task is to classify the digit. Table 6 summarizes results for r from 1 to 3. Not all graphs are fully connected at r = 3, but training at r = 4 exceeds our memory limit. Noticeable performance gains are achieved at r = 2, but performance generally decreases at r = 3. The CLS-node consistently improves performance at r = 1 but not otherwise, indicating that the CLS-node and increased r-size have subsumed effects.
CIFAR10, Graph Classification. CIFAR10 is an image classification dataset converted into superpixel graphs, where each node’s features are the super-pixel coordinates and intensity. The images consist of ten natural motifs, and the task is to classify the motif, e.g., dog, ship, or airplane. Table 7 provides results for r from 1 to 3. Not all graphs are fully connected at r = 3, but training at r = 4 led to out-of-memory issues. Top performing versions are all at r = 1, and performance degrades for r > 1. As with MNIST, the CLS-node only improves performance at r = 1, again indicating its shared (subsumed) effects with increased r-sizes.
6.1 NEIGHBORSMATCH, OVER-SQUASHING
Alon and Yahav (2021) introduce a toy problem called NeighborsMatch to benchmark the extent of over-squashing in GNNs, while controlling over-squashing by limiting the problem radius rp. The graphs in the dataset are binary trees of depth equal to the problem radius rp. Thus, the graphs are
structured and sparse, and the number of edges grows linearly with the increased receptive field r. See Figure 1, Appendix C, for results with GatedGCN. Increasing the receptive field r by a step of 1 increases the attainable problem radius by a step of 1, while using the CLS-node at r = 1 falls between the performance of r = 2 and r = 3 but with a much longer tail. This further showcases the partly subsuming and partly distinct (complementary and conflicting) effects that the receptive field and the CLS-node have, as also observed on the other benchmarks.
6.2 COMPUTATIONAL ANALYSIS
For all positional encodings, the number of edges determines the asymptotic runtime and memory use. The CLS-node only introduces an additive factor. Figures 4 and 5 in Appendix E show that the runtime in practice scales roughly the same as the density, as the receptive field size is increased; though real runtime has a significant constant factor.
6.3 SELECTING POSITIONAL ENCODING AND HOPS SIZE
We recommend the adjacency positional encodings together with the CLS-node. In terms of ranked performance across the 6 datasets, adjacency and spectral positional encodings perform the same, but the spectral encoding performs considerably worse on the ZINC dataset, while the differences are smaller on the other datasets. Additional experiments in Appendix D, Figure 2, assess the discriminative power of the different encodings. However, no positional encoding is superior in all aspects; each has unique benefits as well as drawbacks. This is made apparent by treating r as a parameter and observing the performance differences across values of r. Furthermore, the CLS-node is part of the best-performing configuration more often than not. Similarly, no fixed r is optimal for all datasets; the optimal r depends on the dataset and the amount of compute. Appendix F shows that increased r diminishes the reliance on homophily as an inductive bias, and thus low homophily of a dataset can serve as an indicator for selecting an increased r. If the density does not change much with a change in r, then neither does performance. The spectral positional encodings, the CLS-node, and increased r have subsuming effects on multiple datasets; there, the CLS-node or spectral positional encodings may be preferred as computationally cheaper alternatives to increasing r.
From this empirical study, for picking optimal r, we recommend computing the densities for increasing r and picking the first one where the average density exceeds 0.5 to reap most of the performance boosts. This seems to maintain a helpful locality bias as well as to significantly reduce the compute compared to complete attention. See Appendix G for further discussion.
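This heuristic is straightforward to implement from the adjacency powers used above (a sketch; the 0.5 threshold is the value recommended here):

```python
import numpy as np

def pick_receptive_field(adjs, r_max=10, threshold=0.5):
    """Smallest r whose average rewired density |E'|/|V|^2 exceeds threshold.

    adjs: list of dense 0/1 adjacency matrices, one per graph in the dataset.
    """
    for r in range(1, r_max + 1):
        densities = []
        for adj in adjs:
            n = len(adj)
            # Nodes within r hops correspond to nonzero entries of (A + I)^r.
            reach = np.linalg.matrix_power(adj + np.eye(n), r) > 0
            np.fill_diagonal(reach, False)
            densities.append(reach.sum() / n ** 2)
        if np.mean(densities) > threshold:
            return r
    return r_max
```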
7 DISCUSSION
Our simple graph rewiring and positional encodings achieve state-of-the-art performance, widening receptive fields while alleviating over-squashing. This is largely due to the ease with which our method can be applied to models stemming from a large body of work on GNNs, highlighting the benefits of our model-agnostic approach.
The reality is that attention with complete receptive fields is still computationally intractable for most practitioners and researchers. Here, however, we show that significant performance boosts via attention can be obtained by increasing the receptive field only slightly. This opens up recent work to a broader range of practitioners and provides fairer conditions for comparing GNNs. In addition, our systematic investigation of increased receptive fields and positional encodings gives further insight into the necessity of homophily for the success of GNNs and highlights other implicit biases in GNN architectures.
A TRAINING DETAILS
Both code and training follow Dwivedi et al. (2020) closely and, to a lesser extent, Dwivedi and Bresson (2021), which uses the same code base.
Like Dwivedi et al. (2020), we use the Adam optimizer (Kingma and Ba, 2015) with the same learning-rate decay strategy. The initial learning rate is set to 10^-3 and is halved if the validation loss does not improve after a fixed number of epochs ("lr_schedule_patience", either 5 or 10). Instead of setting a maximum number of epochs, training stops either when the learning rate reaches 10^-6 or when the computation time reaches 12 hours (6 hours for NeighborsMatch). Experiments are run with 4 different seeds; we report summary statistics from the 4 results.
Below we include training settings for the different datasets.
A.1 ZINC
"model": GatedGCN and Transformer, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.2 AQSOL
"model": GatedGCN and MoNet, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.3 CLUSTER
"model": GatedGCN and Transformer, "batch_size": 48 (GatedGCN), 32 or 16 (Transformer), "lr_schedule_patience": 5, "max_time": 12
A.4 PATTERN
"model": GatedGCN, "batch_size": 48, "lr_schedule_patience": 5, "max_time": 12
A.5 MNIST
"model": GatedGCN, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.6 CIFAR10
"model": GatedGCN, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.7 NEIGHBORSMATCH
"model": GatedGCN,
"batch_size": 256, "lr_schedule_patience": 10, "max_time": 6
B TRANSFORMER IMPLEMENTATION
We implemented a simple version of the Transformer adapted to graphs:
$$\hat{h}^l_i = \mathrm{BN}(h^{l-1}_i), \qquad \hat{\hat{h}}^l_i = \Big\Vert_{k=1}^{H}\Big(\sum_{j \in N_i \cup \{i\}} a^{l,k}_{i,j}\, W^l_k\, \hat{h}^l_j\Big) + h^{l-1}_i, \qquad h^l_i = \mathrm{FFN}\big(\mathrm{BN}(\hat{\hat{h}}^l_i)\big) + \hat{\hat{h}}^l_i$$

with

$$\hat{e}^l_{i,j} = \mathrm{BN}(e^{l-1}_{i,j}), \qquad \hat{a}^{l,k}_{i,j} = \big((A^l_k \hat{h}^l_i)^T (B^l_k \hat{h}^l_j) + C^l_k\, \hat{e}^l_{i,j}\big)/d,$$

$$a^{l,k}_{i,j} = \frac{\exp(\hat{a}^{l,k}_{i,j})}{\sum_{j' \in N_i \cup \{i\}} \exp(\hat{a}^{l,k}_{i,j'})}, \qquad e^l_{i,j} = \mathrm{FFN}(\hat{e}^l_{i,j}) + e^{l-1}_{i,j}$$
Here, h and e are node and edge features (resp.) from the previous layer, $W^l_k, A^l_k, B^l_k \in \mathbb{R}^{d/H \times d}$ and $C^l_k \in \mathbb{R}^{1 \times d}$ are learnable weight matrices, H is the number of attention heads, and BN is short for batch normalization. $\Vert_{k=1}^{H}$ denotes the concatenation of the attention heads.
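For concreteness, the following is a minimal PyTorch sketch of this layer in dense (masked) form; real implementations use sparse message passing over the rewired edge set, and batch normalization here is applied over all node/edge entries of a single graph for simplicity:

```python
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    """Dense-form sketch of the graph Transformer layer defined above."""

    def __init__(self, d: int, heads: int):
        super().__init__()
        assert d % heads == 0
        self.d, self.H, self.dh = d, heads, d // heads
        self.W = nn.Linear(d, d, bias=False)       # value maps W_k (all heads)
        self.A = nn.Linear(d, d, bias=False)       # query maps A_k
        self.B = nn.Linear(d, d, bias=False)       # key maps B_k
        self.C = nn.Linear(d, heads, bias=False)   # per-head edge terms C_k
        self.bn_h, self.bn_hh = nn.BatchNorm1d(d), nn.BatchNorm1d(d)
        self.bn_e = nn.BatchNorm1d(d)
        self.ffn_h = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))
        self.ffn_e = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))

    def forward(self, h, e, mask):
        # h: (n, d) nodes; e: (n, n, d) edges; mask: (n, n) bool, True iff j in N_i.
        n = h.size(0)
        mask = mask | torch.eye(n, dtype=torch.bool, device=h.device)  # N_i + {i}
        h_hat = self.bn_h(h)
        e_hat = self.bn_e(e.reshape(n * n, -1)).reshape(n, n, -1)
        q = self.A(h_hat).view(n, self.H, self.dh)
        k = self.B(h_hat).view(n, self.H, self.dh)
        v = self.W(h_hat).view(n, self.H, self.dh)
        scores = (torch.einsum("ihd,jhd->ijh", q, k) + self.C(e_hat)) / self.d
        scores = scores.masked_fill(~mask.unsqueeze(-1), float("-inf"))
        attn = torch.softmax(scores, dim=1)        # softmax over neighbors j
        agg = torch.einsum("ijh,jhd->ihd", attn, v).reshape(n, self.d)
        hh = agg + h                               # residual to h^{l-1}
        h_out = self.ffn_h(self.bn_hh(hh)) + hh
        e_out = self.ffn_e(e_hat) + e
        return h_out, e_out
```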
B.1 DESIGN CHOICES AND CHALLENGES
There are many variations on the Transformer model. Following Ying et al. (2021), we put the normalization before the multi-head attention, which caused instability when training on CLUSTER with Laplacian (spectral) positional encodings. This was fixed by putting the normalization after the attention, or by using layer normalization instead of batch normalization; however, these changes reduced performance on ZINC. While the GatedGCN worked well with identical architecture parameters across datasets, we found that the Transformer needed more variation to stay competitive on MNIST and CIFAR10; in particular, fewer layers and larger hidden dimensions.
Transformers use multi-head attention, which places a vector of dimension equal to the number of heads on each edge (edges seen as directed). Hence, the memory load becomes 2 × |E| × num_heads (in our experiments, num_heads = 6), compared to only 2 × |E| for the GatedGCN. This causes a memory bottleneck for the Transformer that may force a reduced batch size to avoid memory issues.
B.2 OTHER VARIANTS
We implemented other variants, including more involved Transformers. As in Vaswani et al. (2017), we ran the path integers through sine and cosine functions of different frequencies, and, inspired by Dai et al. (2019) and Ke et al. (2020), we implemented a more involved incorporation of relative positions in the multi-head attention (see below); however, we found performance to be comparable.
In natural language processing, the input is a sequence (a line graph) x = (x1, . . . , xn) of text tokens from a vocabulary set V, with each token having a one-hot encoding $f_V : V \to \{0,1\}^{|V|}$. The word embeddings $E \in \mathbb{R}^{n \times d}$ for n tokens are formed as $E = (W_{\mathrm{embed}} f_V(x_i) \mid x_i \in x)$, where $W_{\mathrm{embed}} \in \mathbb{R}^{d \times |V|}$ is a learnable weight matrix. The original Transformer model used absolute positional encodings, meaning that the positional encoding is added to the token embedding at the input layer. Consider a positional encoding function $pe : \mathbb{N}_0 \to \mathbb{R}^d$. Then the first input is
$$h^0 = \big(W_{\mathrm{embed}} f_V(x_i) + pe(i) \mid i = 1, \dots, n\big) = E + U$$

where $U = (pe(i) \mid i = 1, \dots, n) \in \mathbb{R}^{n \times d}$. Typically, pe contains sine and cosine functions of different frequencies:
$$pe(k, 2l) = \sin\big(k / 10000^{2l/d}\big), \qquad pe(k, 2l+1) = \cos\big(k / 10000^{(2l+1)/d}\big)$$
where k ∈ N is the position and l ∈ N is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 × 2π. This function was chosen because it was hypothesized that it would allow the model to easily learn to attend by relative positions, since for any fixed offset m, pe(k +m) is a linear function of pe(k). It was also hypothesized that it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
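A direct transcription of these two formulas (assuming an even dimension d):

```python
import numpy as np

def sinusoidal_pe(k: int, d: int) -> np.ndarray:
    """pe(k) in R^d, transcribing the two formulas above (d assumed even)."""
    assert d % 2 == 0
    l = np.arange(d // 2)
    pe = np.empty(d)
    pe[0::2] = np.sin(k / 10000 ** (2 * l / d))       # even dimensions
    pe[1::2] = np.cos(k / 10000 ** ((2 * l + 1) / d)) # odd dimensions
    return pe
```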
In many cases, absolute positional encodings have been replaced with relative fully learnable positional encodings and relative partially learnable positional encodings (Dai et al., 2019). To justify these, consider the first attention layer with absolute positional encodings:
$$A^{\mathrm{abs}}_{i,j} = E_{x_i} W_q W_k^T E_{x_j}^T + E_{x_i} W_q W_k^T U_j^T + U_i W_q W_k^T E_{x_j}^T + U_i W_q W_k^T U_j^T$$
For relative (fully and partially) learnable positional encodings we have instead:
$$A^{\mathrm{rel}}_{i,j} = \underbrace{E_{x_i} W_q W_{k,E}^T E_{x_j}^T}_{(1)} + \underbrace{E_{x_i} W_q W_{k,R}^T R_{i-j}^T}_{(2)} + \underbrace{u\, W_{k,E}^T E_{x_j}^T}_{(3)} + \underbrace{v\, W_{k,R}^T R_{i-j}^T}_{(4)}$$
where $u, v \in \mathbb{R}^{1 \times d}$ are learnable weights and $R_{i-j} \in \mathbb{R}^{1 \times d}$ is a relative positional encoding between positions i and j. Each term has the following intuitive meaning: term (1) represents content-based addressing, term (2) captures a content-dependent positional bias, term (3) governs a global content bias, and term (4) encodes a global positional bias.
For relative fully learnable positional encodings, $W_{k,R}^T R_{i-j}^T$ is a learnable weight in $\mathbb{R}^{d \times 1}$ for each $i - j \in \mathbb{N}$, while for relative partially learnable positional encodings, $R_{i-j} = pe(|i - j|)$, where pe is the sinusoidal function from before.
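A sketch of the fully learnable variant of these relative attention scores (module names are ours); replacing the embedding table R with fixed sinusoids of |i − j| yields the partially learnable version:

```python
import torch
import torch.nn as nn

class RelativeAttentionScores(nn.Module):
    """Sketch of the fully learnable relative scores A^rel defined above."""

    def __init__(self, d: int, max_dist: int):
        super().__init__()
        self.Wq = nn.Linear(d, d, bias=False)
        self.Wk_E = nn.Linear(d, d, bias=False)
        self.Wk_R = nn.Linear(d, d, bias=False)
        self.u = nn.Parameter(torch.zeros(1, d))
        self.v = nn.Parameter(torch.zeros(1, d))
        self.R = nn.Embedding(max_dist + 1, d)  # learnable R per relative distance

    def forward(self, E: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        # E: (n, d) content embeddings; dist: (n, n) non-negative LongTensor
        # of relative distances (e.g., shortest-path lengths).
        q = self.Wq(E)
        content = (q + self.u) @ self.Wk_E(E).T               # terms (1) + (3)
        R = self.Wk_R(self.R(dist))                           # (n, n, d)
        position = torch.einsum("id,ijd->ij", q + self.v, R)  # terms (2) + (4)
        return content + position
```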
We implemented both fully and partially learnable positional encodings for the shortest-path positional encodings (integer-valued) and related versions for the other positional encodings (in $\mathbb{R}^d$). We include results in Tables 8 and 9.
C OVER-SQUASHING
Results for over-squashing experiment can be found in Figure 1.
D ADDITIONAL EVALUATION OF POSITIONAL ENCODINGS
Here we provide toy data and a task for comparing positional encodings. In this task, we wish to assess how powerful the positional encodings are in practice, i.e., how well they discriminate between different graph-isomorphism classes. Specifically, we generate 100 random Erdős–Rényi graphs and then expand the receptive field so that each graph is fully connected. Thus, the positional encodings become the sole instrument for communicating the connectivity/topology of the original graph. The task is to retrieve a specific graph among all 100 graphs, i.e., graph classification with 100 classes. Hence, achieving 100% accuracy means that the GNN has been able to discriminate between all graphs based on the positional encodings alone. We only look at training accuracy here, since we are interested in the power to overfit, not to generalize. Results can be found in Figure 2.
All positional encodings are able to solve the task after a sufficient amount of training, except Adj-10. Adj-5 and Adj-10 encode the adjacency matrix to the powers 5 and 10, respectively (at both of which all graphs are fully connected). Adj-10 encodes between any two nodes the number of paths of length 10, the number of paths of length 9, and so on. The experiments indicate that too much of this information confuses the GNN and makes it harder to discriminate between graphs. The shortest-path and Adj-5 positional encodings are the fastest at solving the task; the Laplacian encoding is slower, which may be because it is only unique up to a sign and we randomly flip the sign during training.
E COMPUTATIONAL RUNTIME AND MEMORY USE
In our implementation, computing the positional encodings and expanding the r-hops of the graph is done in the same process for shortest-path and adjacency positional encodings; this step thus always occurs, and we found that implementing it via iterative matrix multiplications of the adjacency matrix gave the fastest results. How this scales with r can be found in Figure 3; since each increment of r results in an additional matrix multiplication, the linear increase is expected. The spectral positional encoding adds the same runtime per graph, 1.3 × 10^-3 seconds, across r-sizes. These matrix multiplications are done on CPU rather than GPU, and running them on GPU could result in speed-ups. However, the runtime for computing these positional encodings is at least an order of magnitude smaller (per graph) than the runtime of the subsequent GNN on a GPU, so there was no need to optimize it further.
In Figures 4 and 5 we include the actual runtime of the GNN (on GPU) for different positional encodings and hop sizes, juxtaposed with the density of the modified graphs, for the ZINC and CIFAR10 datasets. Note that we exclude here the computation of the positional encodings on the input graph, which can be found in Figure 3.
Most graphs to which GNNs are applied are connected, and typically the number of edges is greater than the number of nodes, i.e., |E| ≥ |V|. Since all established GNNs make use of the edges in one way or another, the number of edges usually determines the asymptotic behavior of runtime and memory use, i.e., both are in O(|E|). With modern deep learning and specialized graph-learning frameworks, GPU parallelization and other technical aspects also affect memory and runtime. Thus, Figures 4 and 5 compare theoretical runtime (dominated by the density) with the actual runtime of code run on GPUs. We find that density and actual runtime are strongly correlated. In Figure 6 we include the memory use for increasing radius on the ZINC dataset and find that it is roughly linear in the density as well.
F HOMOPHILY SCORE AND PERFORMANCE
We include experiments to investigate the correlation between homophily score (Ma et al., 2021) and performance when increasing the hop size. This applies to the node classification datasets we used, CLUSTER and PATTERN. We split the test set into three buckets, i.e., a sorted segmentation of the graphs by increasing homophily score. We evaluate trained GatedGCN models with adjacency positional encodings for r-values 1 and 2 (at r = 2, almost all graphs are fully connected). See Tables 10 and 11 for results.
We find that a high homophily score correlates much more strongly with performance at r = 1 than at r = 2. This indicates that an increased r-size diminishes the reliance on homophily as an inductive bias.
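For reference, the edge-homophily score used for this bucketing follows the definition in Section 3 (a minimal sketch):

```python
import numpy as np

def edge_homophily(edges: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of edges whose endpoints share a label.
    edges: (num_edges, 2) int array; labels: (num_nodes,) int array."""
    return float(np.mean(labels[edges[:, 0]] == labels[edges[:, 1]]))
```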
G OPTIMAL POSITIONAL ENCODING AND HOPS SIZE
Again, we recommend the adjacency positional encodings together with the CLS-node. We find that in terms of ranked performance on the 6 datasets, adjacency and spectral positional encodings perform at the same level, but the spectral encoding performs considerably worse on the ZINC dataset, while the differences are smaller on the other datasets. The spectral encoding hardcodes global-to-local information on the nodes, and the size of the encoding vector is a hyperparameter; we found performance not to be too sensitive to this hyperparameter, but future work could investigate it further. Spectral embeddings also use less memory, as they do not encode the embeddings as edge features; however, since information is still propagated along edges, we find this memory saving to be significant but not asymptotically different. The adjacency encoding breaks down faster as r is increased compared to the other positional encodings; we believe this is due to the corresponding increase in the size of the embedding vectors, which introduces low-signal information that is also easy to overfit to, e.g., the number of paths of length 10 between two nodes (where any edge can be used multiple times). The Erdős–Rényi experiments in Appendix D support this observation. All in all, however, the adjacency encoding stands out slightly considering performance, runtime, memory use, and the toy experiments. Furthermore, the CLS-node is part of the best-performing configuration more often than not, and it has the additional advantage of reaching peak performance at lower r-sizes, where in some cases it also reduces runtime and memory use compared to increasing the r-size instead.
In this work we do not find a fixed r-size that is optimal for all datasets. The optimal r depends on the dataset and the amount of compute available. Given the fixed amount of compute used in our experiments, we found that all the best performance occurred at r-sizes of four or smaller. We provide a heuristic for selecting a good r-size, but ultimately it depends on the amount of compute and memory available.
| 1. What is the focus of the paper regarding positional encodings for graph neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of model agnosticism, scalability, and experimental limitations?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are some of the experiments that the reviewer suggests to validate the approach further?
5. How does the reviewer compare the success of adjacency powers in encoding different orders of transitive information?
| Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
| Summary Of The Paper
Main concern. A catalog of experiments to find the best combination of positional encoders for each explored type of graph. The most successful encoding (powers of adjacency matrices) is not scalable unless large graphs are previously sampled. The argument that the powers of adjacencies for small r = 1, 2 beat full attention is true, but this is not enough to validate the approach.
I agree with the model agnosticism of the proposal, but it is not inductive at all (positional encodings must be computed for every input graph in the case of graph classification).
POSITIONAL ENCODINGS: ONLY TESTED WITH SBMs and KNN-GRID graphs. The conclusions obtained for spectral positional encoding match the intuition and can be expected from an analysis of the type of graph. In any case, incorporating this kind of analysis would make the paper less “empirical” and more “principled”.
a) In SBM graphs, it is logical that spectral positional encodings do not work well for r > 1, because the structure of the SBM is encoded by the first K non-trivial Laplacian eigenvalues if there are K communities (depending on the inter-class structural noise). For instance, a nice experiment would be to track the performance of the positional encoders as the inter-class structural noise increases. From the experiments in the paper one can infer that if the spectral information is not useless for r < 3 in CLUSTER/PATTERN, it is because the inter-class structural noise is large.
b) On the other hand, KNN graphs (e.g., CIFAR) or GRID graphs (MNIST) are typically broken into two communities using just the first non-trivial eigenvector (Fiedler vector). For this analysis see “SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators”.
Regarding the success of adjacency powers (which generalize shortest paths), it is interesting to confirm that not too many hops are needed in general. I suggest addressing these powers in terms of how powerful they are at encoding different orders of transitive information. However, the main drawback is that this method does not scale well to real-life graphs (e.g., the reported out-of-memory on MNIST).
There are no experiments on SOCIAL NETWORKS (e.g., power-law graphs), where hubs do exist and naturally incorporate the CLS-node concept. Hubs also make shortest paths almost uniform (unit length); e.g., in the case of Facebook (“friend circles”) with two-step (r = 2) separation, the proposed positional encoders become more ambiguous.
Strengths And Weaknesses
Strength: Model agnosticism is interesting.
Weaknesses: As stated above, the approach is neither inductive (positional encodings must be computed for every input graph in graph classification) nor scalable (e.g. the computation of transitive information even when r=1,2 may be prohibitive). The number of experimental baselines is very limited.
Clarity, Quality, Novelty And Reproducibility
Clarity. The paper is well-written and easy to follow.
Quality. The paper can be considered an "empirical" paper whose objective is to elucidate the role of transitivity in certain types of graphs.
Novelty. Moderate (incremental wrt positional encodings in Transformers).
Reproducibility. Code follows the Transformer Implementation. No code was released. |
Over-squashing and under-reaching. Section 6.1 demonstrates, via the NeighborsMatch problem (Alon and Yahav, 2021), that increased receptive fields as well as the CLS-node alleviate oversquashing; however, this toy problem is concerned with matching node attributes and not with graph topology. We want positional encodings that alleviate over-squashing in the sense that it enables effective information propagation for the task at hand. Our experiments showing that expanded receptive fields alleviate the over-squashing problem and that the best performing positional encoding varies across datasets showcase this. Additionally, our experiments on the discriminative power of positional encodings in Appendix D further help discern the different options.
4.2.2 POSITIONAL ENCODING OPTIONS
Taking the properties above into consideration, we now give a few options for positional encodings below, compared empirically in Section 6.
Shortest path. For any edge e ∈ G′r, the shortest-path positional encoding takes pe ∈ {0,1, . . . , r} to be the integer length of the shortest path in G between the corresponding nodes of E. These embeddings are lossless because G is the subgraph of g(G) with pe = 1. They also are free to compute given our construction of G′r from G. But, multiple vertices in the r-neighborhood of a vertex in V could have the same positional encoding in V ′, and shortest path lengths are insufficient to capture complex inductive biases of multi-hop GNNs like diffusion over large neighborhoods. Shortest-path positional encoding was previously used by Ying et al. (2021), for extending G to a fully-connected graph, but they did not consider smaller r values.
Spectral embedding. Laplacian eigenvectors embed graph vertices into Euclidean space, providing per-vertex features that capture multi-scale graph structure. They are defined by factorizing the graph Laplacian matrix, ∆ = I−D−1/2AD−1/2, where D is the degree matrix and A is the adjacency matrix. We call the result a spectral positional embedding. We can use the q smallest non-trivial Laplacian eigenvectors of G as a node-based positional encoding pv ∶ V ′ → Rq . Following Dwivedi et al. (2020), since these eigenvectors are known only up to a sign, we randomly flip the sign during training. Prior work consider Laplacian eigenvectors as additional node features without topological rewiring (Dwivedi et al., 2020).
Spectral positional encodings do not necessarily make g injective. Even when q = ∣V ∣, this encoding fails to distinguish isospectral graphs (Von Collatz and Sinogowitz, 1957), but these are rarely encountered in practice. On the other hand, spectral signatures are common for graph matching and other tasks. Moreover, unlike the remaining features in this section, spectral positional encodings capture global information about G rather than only r-neighborhoods. Finally, we note that the diffusion equation for graphs can be written as ut = −∆u; this graph PDE can be solved in closed-form given the eigenvectors and eigenvalues of ∆. Hence, given the spectral embedding of G in G′, we can simulate diffusion-based multi-hop GNN architectures up to spectral truncation.
Powers of the adjacency matrix. Our final option for positional encoding generalizes the shortest path encoding and can capture the inductive biases of diffusion-based GNNs. The entry at position (i, j) of the k-th power Ak of the adjacency matrix A of graph G gives the number of paths of length k between node i and j in G. Concatenating the powers from k = 1, . . . , r, we get for each edge e in G′ an integer vector pe ∈ Nr giving the powers of the adjacency matrix positional encoding. This embedding can be used to recover the shortest-path embedding. This adjacency-derived embedding can also generalize the inductive bias of diffusion-based multi-hops GNNs. In particular, diffusion aggregation weights are often approximated using a Taylor series, W = ∑∞i=0 θiAi ≈ ∑ri=0 θiAi ∶= W , where θi are a prescribed decaying sequence (θi > θi+1). The entries of W above can be computed linearly from the adjacency-powers positional encoding. Hence, it is strictly more general than using prescribed diffusion-based aggregation weights on G.
Lossless encodings. The previously discussed lossless-encoding properties of our graph rewiring method are accomplished by two of the above-mentioned positional encodings:
Proposition 1. Shortest-path and adjacency matrix positional encodings yield lossless rewirings.
Proof. Recovering the original graph G = (V,E) from the rewired graph G′ = (V,E′) is almost trivial. With the shortest-path position encoding the original graph can be recovered via E = {e∣e ∈ E′, pe = 1} and for powers-of-the-adjacency-matrix encodings via E = {e∣e ∈ E′, (pe)1 = 1}.
5 IMPLEMENTATION DETAILS
Our method is compatible with most GNN architectures. Here we adopt GatedGCN (Bresson and Laurent, 2018), MoNet (Monti et al., 2017), and an implementation of the Transformer (Vaswani et al., 2017); see Appendix B for details. For each model, we consider graph rewiring with a different r-hop receptive field around each node, and compare with and without the CLS-node, as well as the three positional encodings introduced in Section 4.2.2.
Input and readout layers. Typically, GNNs on a graph G = (V,E, fv, fe) first embed node features fv and edge features fe through a small feed-forward network (FFN) input layer. When incorporating positional encodings per edge/node, we embed using a small FFN and add them at this input layer. After this layer, it updates node and edge representations through successive applications of GNN layers. Lastly, a readout layer is applied to the last GNN layer L. For node classification, it is typically a FFN applied to each node feature hLi . For graph classification, it is typically an FFN applied to the mean or sum aggregation of all node features hL. For graph classification and when using the CLS-node, we aggregate by applying the FFN to the CLS-node’s features in the last layer.
6 EXPERIMENTS
We evaluate performance on six benchmark graph datasets: ZINC, AQSOL, PATTERN, CLUSTER, MNIST, and CIFAR10 from (Dwivedi et al., 2020). The benchmark includes a training time limit of 12 hours; we use similar compute to their work via a single TeslaV100 GPU. Training also stops if for a certain number of epochs the validation loss does not improve (Dwivedi et al., 2020). Thus, our experiments consider the ease of training and efficient use of compute. For the first two datasets, we run GatedGCN, MoNet, and Transformer to show that rewiring and positional encoding work for different models; for the other datasets we run only GatedGCN to focus on the effects of receptive field size, the CLS node, and positional encodings. For all datasets, we run with increasing receptive fields, with different positional encodings, and with or without the CLS-node. In the tables, density is the average of the densities (defined as the ratio ∣E∣/∣V ∣2) of each graph in the dataset rewired to the respective receptive field size. See Appendix A for details.
Table 1 compares our best results with other top performing methods and models. All our top performing models come from the GatedGCN, although the Transformer performs comparably; however, the Transformer was harder to train—see Appendix B. MoNet performs worse but still sees significant improvements from our approach. Our GatedGCN implementation was taken from the same work (Dwivedi et al., 2020) that introduced the benchmarks and code that we use. Thus, hyperparameters might be better adapted to the GatedGCN. This highlights the benefits of our modelagnostic approach, which allows us to pick the best models from Dwivedi et al. (2020) and combine them with our methods. Our approach with 100K parameters achieves state-of-the-art on all datasets among models with 100K parameters and even outperforms 500K-parameter models.
ZINC, Graph Regression. ZINC consists of molecular graphs, and the task is graph property regression for constrained solubility. Each ZINC molecule is represented as a graph with atoms as nodes and bonds as edges. In Table 2 we present results for r from 1 to 10. The density column shows that these graphs are sparse and that the number of edges increases almost linearly as the receptive field r is increased. Performance across all settings noticeably improves when increasing r above 1. Top performance is achieved with the CLS-node and powers-of-the-adjacency positional encoding at r = 4, at 52% of the edges and compute compared to complete attention. When using the CLS node and/or spectral positional encodings, top performance generally occurs at lower r, which is likely due to the global nature of these changes to the graphs. The GatedGCN and Transformer perform comparably for the same settings, with a slight edge to the GatedGCN. The two models show the same performance trends between settings, i.e., both increased receptive fields and the CLS-node boost performance. Further, Ying et al. (2021) report a performance of 0.123 on ZINC with their Graphormer(500K), i.e., a Transformer with positional encodings and complete attention. However, their training is capped at 10,000 epochs while ours is capped at 1,000 epochs; training their Graphormer(500K) with the same restrictions leads to a score of 0.26 on ZINC.
AQSOL, Graph Regression. AQSOL consists of the same types of molecular graphs as ZINC. The densities of AQSOL graphs are slightly higher than those of ZINC. For all settings not including the CLS-node or spectral positional encodings, performance improves significantly when increasing r above 1 (see Table 3); in these settings, the better-performing r are larger than for ZINC. However, when including the CLS node or spectral positional encodings, performance changes much less across different r. This indicates the importance of some form of global bias on this dataset. At least one of larger r values, spectral positional encoding, or the CLS-token is required to provide the global bias, but their effect differs slightly across the two models: GatedGCN performs significantly better, and larger r values still boost performance when combined with the CLS-token for MoNet, but not for GatedGCN. MoNet uses a Bayesian Gaussian Mixture Model (Dempster et al., 1977); since MoNet was not constructed with edge features in mind, we simply add edge embeddings to the attention coefficients. Not surprisingly, this points to the importance of including edge features for optimal use of expanded receptive fields and positional encodings.
CLUSTER, Node Classification. CLUSTER is a node classification dataset generated using a stochastic block model (SBM). The task is to assign a cluster label to each node. There are 6 cluster labels in total, and the average homophily is 0.34. CLUSTER graphs do not have edge features. Table 4 gives results for r-hop neighborhoods from 1 to 3. As can be seen in the density column, at r = 3 all graphs are fully connected, and more than 99% of them are fully connected at r = 2. Hence, these graphs are dense. Significant improvements are achieved by increasing r for all but the spectral positional encoding (again showcasing its global properties), which together with the CLS node performs competitively at r = 1. The CLS node is helpful overall, especially at r = 1. The GatedGCN and Transformer perform comparably for all but the spectral positional encodings, where the Transformer breaks down. We found that this breakdown was due to the placement of batch normalization, discussed in Appendix B.1.
PATTERN, Node Classification. The PATTERN dataset is also generated using an SBM model and has an average homophily of 0.66. The task is to classify the nodes into two communities, and the graphs have no edge features. Table 5 shows results for r-hops from 1 to 3. Similarly to CLUSTER, the density column shows that the graphs are dense. Significant improvements are achieved by increasing r > 1 and/or using the CLS-node. Performance generally decreases at r = 3. Similarly to CLUSTER, the CLS-node helps at r = 1, but for both CLUSTER and PATTERN the top-performing model uses a larger r > 1 without the CLS-node, suggesting that trade-offs exist between the CLS-node and increased receptive fields. Compared to CLUSTER, our approach shows less of a performance boost for PATTERN, which leads us to hypothesize that our approach is more helpful for graphs with low homophily, a hypothesis we investigate further in Appendix F.
MNIST, Graph Classification. MNIST is an image classification dataset converted into super-pixel graphs, where each node’s feature includes super-pixel coordinates and intensity. The images are of handwritten digits, and the task is to classify the digit. Table 6 summarizes results for r from 1 to 3. Not all graphs are fully connected at r = 3, but training at r = 4 exceeds our memory limit. Noticeable performance gains are achieved at r = 2, but performance generally decreases at r = 3. The CLS-node consistently improves performance at r = 1 but not otherwise, indicating that the CLS-node and increased r-size have subsumed effects.
CIFAR10, Graph Classification. CIFAR10 is an image classification dataset converted into superpixel graphs, where each node’s features are the super-pixel coordinates and intensity. The images consist of ten natural motifs, and the task is to classify the motif, e.g., dog, ship, or airplane. Table 7 provides results for r from 1 to 3. Not all graphs are fully connected at r = 3, but training at r = 4 led to out-of-memory issues. Top performing versions are all at r = 1, and performance degrades for r > 1. As with MNIST, the CLS-node only improves performance at r = 1, again indicating its shared (subsumed) effects with increased r-sizes.
6.1 NEIGHBORSMATCH, OVER-SQUASHING
Alon and Yahav (2021) introduce a toy problem called NeighborsMatch to benchmark the extent of over-squashing in GNNs, while controlling over-squashing by limiting the problem radius r_p. The graphs in the dataset are binary trees of depth equal to the problem radius r_p; thus, the graphs are structured and sparse, and the number of edges grows linearly with the increased receptive field r. See Figure 1, Appendix C, for results with GatedGCN. Increasing the receptive field r by a step of 1 increases the attainable problem radius by a step of 1, while using the CLS-node at r = 1 falls between the performance of r = 2 and r = 3 but with a much longer tail. This further showcases the partly subsumed and partly distinct (complementary and conflicting) effects that the receptive field and the CLS-node have, as also observed on the other benchmarks.
6.2 COMPUTATIONAL ANALYSIS
For all positional encodings, the number of edges determines the asymptotic runtime and memory use. The CLS-node only introduces an additive factor. Figures 4 and 5 in Appendix E show that the runtime in practice scales roughly the same as the density, as the receptive field size is increased; though real runtime has a significant constant factor.
6.3 SELECTING POSITIONAL ENCODING AND HOPS SIZE
We recommend the adjacency positional encodings together with the CLS-node. In terms of ranked performance across the 6 datasets, adjacency and spectral positional encodings perform the same, but the spectral encoding performs considerably worse on the ZINC dataset, while the differences are smaller on the other datasets. Additional experiments in Appendix D, Figure 2, assess the discriminative power of the different encodings. However, no positional encoding is superior in all aspects; instead, each one has unique benefits as well as drawbacks. This is made apparent by considering r as a parameter and observing the performance differences across values of r. Furthermore, the CLS-node is part of the best-performing configuration more often than not. Similarly, no fixed r is optimal for all datasets; the optimal r depends on the dataset and the amount of compute. Appendix F shows that increased r diminishes the reliance on homophily as an inductive bias, and thus low homophily of a dataset can be used as an indicator for selecting an increased r. If the density does not change much from a change in r, then neither does performance. The spectral positional encodings, the CLS-node, and increased r have subsuming effects for multiple datasets; here the CLS-node or spectral positional encodings may be preferred, computationally cheaper alternatives to increasing r.
From this empirical study, for picking an optimal r, we recommend computing the densities for increasing r and picking the first r where the average density exceeds 0.5; this reaps most of the performance boost. It maintains a helpful locality bias and significantly reduces compute compared to complete attention. See Appendix G for further discussion.
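A sketch of this heuristic (helper names are ours):

```python
# Sketch of the r-selection heuristic (helper names are ours): pick the
# smallest r whose average rewired density |E'|/|V|^2 exceeds 0.5. Edges
# are counted as ordered pairs here; halve if your convention is undirected.
import networkx as nx

def avg_density(graphs, r) -> float:
    total = 0.0
    for G in graphs:
        dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=r))
        num_edges = sum(len(nbrs) - 1 for nbrs in dist.values())  # excludes self-loops
        total += num_edges / len(G) ** 2
    return total / len(graphs)

def pick_r(graphs, r_max=10, threshold=0.5):
    for r in range(1, r_max + 1):
        if avg_density(graphs, r) > threshold:
            return r
    return r_max
```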
7 DISCUSSION
Our simple graph rewiring and positional encodings achieve state-of-the-art performance, widening receptive fields while alleviating over-squashing. This is largely due to the ease of applying our method to models stemming from a large body of work on GNNs, highlighting the benefits of our model-agnostic approach.
The reality is that attention with complete receptive fields is still computationally intractable for most practitioners and researchers. Here, however, we show that significant performance boosts from attention and increased receptive fields can be obtained by increasing the receptive field only slightly, opening up recent work to a broader range of practitioners and providing fairer conditions for comparing GNNs. In addition, our systematic investigation of increased receptive fields and positional encodings gives further insight into the necessity of homophily for the success of GNNs and highlights other implicit biases in GNN architectures.
A TRAINING DETAILS
Both code and training follow Dwivedi et al. (2020) closely and, to a lesser extent, Dwivedi and Bresson (2021), which uses the same code base.
Like Dwivedi et al. (2020), we use the Adam optimizer (Kingma and Ba, 2015) with the same learning-rate decay strategy. The initial learning rate is set to 10^-3 and is reduced by half if the validation loss does not improve after a fixed number of epochs ("lr_schedule_patience", either 5 or 10). Instead of setting a maximum number of epochs, training stops either when the learning rate has reached 10^-6 or when the computational time reaches 12 hours (6 hours for NeighborsMatch). Experiments are run with 4 different seeds; we report summary statistics from the 4 results.
Below we include training settings for the different datasets.
A.1 ZINC
"model": GatedGCN and Transformer, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.2 AQSOL
"model": GatedGCN and MoNet, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.3 CLUSTER
"model": GatedGCN and Transformer, "batch_size": 48 (GatedGCN), 32 or 16 (Transformer), "lr_schedule_patience": 5, "max_time": 12
A.4 PATTERN
"model": GatedGCN, "batch_size": 48, "lr_schedule_patience": 5, "max_time": 12
A.5 MNIST
"model": GatedGCN, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.6 CIFAR10
"model": GatedGCN, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.7 NEIGHBORSMATCH
"model": GatedGCN,
"batch_size": 256, "lr_schedule_patience": 10, "max_time": 6
B TRANSFORMER IMPLEMENTATION
We implemented a simple version of the Transformer adapted to graphs:
$$\hat{h}^{\,l}_i = \mathrm{BN}\!\left(h^{\,l-1}_i\right), \qquad
\hat{\hat{h}}^{\,l}_i = \Big\Vert_{k=1}^{H}\Big(\sum_{j \in N_i \cup \{i\}} a^{l,k}_{i,j}\, W^l_k\, \hat{h}^{\,l}_j\Big) + h^{\,l-1}_i, \qquad
h^l_i = \mathrm{FFN}\!\left(\mathrm{BN}\!\left(\hat{\hat{h}}^{\,l}_i\right)\right) + \hat{\hat{h}}^{\,l}_i$$

with

$$\hat{e}^{\,l}_{i,j} = \mathrm{BN}\!\left(e^{\,l-1}_{i,j}\right), \qquad
\hat{a}^{l,k}_{i,j} = \Big(\big(A^l_k \hat{h}^{\,l}_i\big)^{T}\big(B^l_k \hat{h}^{\,l}_j\big) + C^l_k\, \hat{e}^{\,l}_{i,j}\Big)/d, \qquad
a^{l,k}_{i,j} = \frac{\exp\!\big(\hat{a}^{l,k}_{i,j}\big)}{\sum_{j' \in N_i \cup \{i\}} \exp\!\big(\hat{a}^{l,k}_{i,j'}\big)}, \qquad
e^l_{i,j} = \mathrm{FFN}\!\left(\hat{e}^{\,l}_{i,j}\right) + e^{\,l-1}_{i,j}$$
Here, $h$ and $e$ are node and edge features (resp.) from the previous layer; $W_k, A_k, B_k \in \mathbb{R}^{d/H \times d}$ and $C_k \in \mathbb{R}^{1 \times d}$ are learnable weight matrices, $H$ is the number of attention heads, and BN is short for batch normalization. $\Vert_{k=1}^{H}$ denotes the concatenation of the attention heads.
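For concreteness, a minimal PyTorch sketch of one such layer operating on a rewired edge list (the tensor layout, the scatter-based per-node softmax, and the omission of max-subtraction for numerical stability are our simplifications; see the benchmark code base for the exact version):

```python
# Minimal PyTorch sketch of the layer above (ours). edge_index[0] holds
# sources j, edge_index[1] destinations i; self-loops assumed included.
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    def __init__(self, d: int, H: int):
        super().__init__()
        assert d % H == 0
        self.d, self.H, self.dk = d, H, d // H
        self.A = nn.Linear(d, d, bias=False)   # per-head queries A^l_k
        self.B = nn.Linear(d, d, bias=False)   # per-head keys B^l_k
        self.W = nn.Linear(d, d, bias=False)   # per-head values W^l_k
        self.C = nn.Linear(d, H, bias=False)   # edge bias C^l_k, one scalar per head
        self.bn_h, self.bn_e = nn.BatchNorm1d(d), nn.BatchNorm1d(d)
        self.bn_mid = nn.BatchNorm1d(d)
        self.ffn_h = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))
        self.ffn_e = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))

    def forward(self, h, e, edge_index):
        src, dst = edge_index
        h_hat, e_hat = self.bn_h(h), self.bn_e(e)
        q = self.A(h_hat).view(-1, self.H, self.dk)[dst]     # [num_edges, H, dk]
        k = self.B(h_hat).view(-1, self.H, self.dk)[src]
        logits = (q * k).sum(-1) / self.d + self.C(e_hat)    # [num_edges, H]
        w = logits.exp()                                     # softmax numerator
        denom = w.new_zeros(h.size(0), self.H).index_add_(0, dst, w)
        a = w / denom[dst].clamp_min(1e-9)                   # a^{l,k}_{i,j}
        v = self.W(h_hat).view(-1, self.H, self.dk)[src] * a.unsqueeze(-1)
        agg = torch.zeros_like(h).view(-1, self.H, self.dk).index_add_(0, dst, v)
        h_mid = agg.reshape(-1, self.d) + h                  # residual connection
        h_out = self.ffn_h(self.bn_mid(h_mid)) + h_mid
        e_out = self.ffn_e(e_hat) + e
        return h_out, e_out
```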
B.1 DESIGN CHOICES AND CHALLENGES
There are many variations on the Transformer model. Following Ying et al. (2021), we put the normalization before the multi-head attention, which caused instability when training on CLUSTER with Laplacian (spectral) positional encodings. This was fixed by putting the normalization after the attention, or by using layer normalization instead of batch normalization; however, these changes reduced performance on ZINC. While the GatedGCN worked well with identical architecture parameters across datasets, we found that the Transformer needed more variation to stay competitive on MNIST and CIFAR10; in particular, fewer layers and larger hidden dimensions.
Transformers use multi-head attention, which places a vector of dimension num_heads on each edge (edges seen as directed). Hence, the memory load becomes 2 × |E| × num_heads (in our experiments, num_heads = 6), whereas for the GatedGCN it is only 2 × |E|. This causes a memory bottleneck for the Transformer that may force a reduced batch size to avoid memory issues.
B.2 OTHER VARIANTS
We implemented other variants, including more involved Transformers. As in Vaswani et al. (2017), we ran the path integers through sine and cosine functions of different frequencies, and, inspired by Dai et al. (2019) and Ke et al. (2020), we implemented a more involved incorporation of relative positions in the multi-head attention (see below); however, we found performance to be comparable.
In natural language processing, the input is a sequence (a line graph) $x = (x_1, \dots, x_n)$ of text tokens from a vocabulary set $V$, with each token having a one-hot encoding $f_V : V \to [0,1]^{|V|}$. The word embeddings $E \in \mathbb{R}^{n \times d}$ for $n$ tokens are formed as $E = (W_{\mathrm{embed}} f_V(x_i) \mid x_i \in x)$, where $W_{\mathrm{embed}} \in \mathbb{R}^{d \times |V|}$ is a learnable weight matrix. The original Transformer model used absolute positional encodings, meaning that the positional encoding is added to the node embedding at the input layer. Consider a positional encoding function $\mathrm{pe} : \mathbb{N}_0 \to \mathbb{R}^d$. Then the first input is
$$h^0 = \big(W_{\mathrm{embed}} f_V(x_i) + \mathrm{pe}(i) \;\big|\; i = 1, \dots, n\big) = E + U,$$

where $U = (\mathrm{pe}(i) \mid i = 1, \dots, n) \in \mathbb{R}^{n \times d}$. Typically pe contains sine and cosine functions of different frequencies:
$$\mathrm{pe}(k, 2l) = \sin\!\big(k/10000^{2l/d}\big), \qquad \mathrm{pe}(k, 2l+1) = \cos\!\big(k/10000^{(2l+1)/d}\big)$$
where $k \in \mathbb{N}$ is the position and $l \in \mathbb{N}$ is the dimension index. That is, each dimension of the positional encoding corresponds to a sinusoid, and the wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. This function was chosen because it was hypothesized that it would allow the model to easily learn to attend by relative positions, since for any fixed offset $m$, $\mathrm{pe}(k+m)$ is a linear function of $\mathrm{pe}(k)$. It was also hypothesized that it might allow the model to extrapolate to sequence lengths longer than those encountered during training.
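The sinusoid above as a small helper (a sketch; the exponent convention follows the formulas in this section):

```python
# Sinusoidal positional encoding as written above (a sketch).
import numpy as np

def sinusoidal_pe(k: int, d: int) -> np.ndarray:
    pe = np.zeros(d)
    for l in range(d // 2):
        pe[2 * l] = np.sin(k / 10000 ** (2 * l / d))
        pe[2 * l + 1] = np.cos(k / 10000 ** ((2 * l + 1) / d))
    return pe

U = np.stack([sinusoidal_pe(k, d=64) for k in range(1, 11)])  # positions 1..10
```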
In many cases, absolute positional encodings have been replaced with relative fully learnable positional encodings and relative partially learnable positional encodings (Dai et al., 2019). To justify these, consider the first attention layer with absolute positional encodings:
$$A^{\mathrm{abs}}_{i,j} = E_{x_i} W_q W_k^{T} E_{x_j}^{T} + E_{x_i} W_q W_k^{T} U_j^{T} + U_i W_q W_k^{T} E_{x_j}^{T} + U_i W_q W_k^{T} U_j^{T}$$
For relative (fully and partially) learnable positional encodings we have instead:
$$A^{\mathrm{rel}}_{i,j} = E_{x_i} W_q W_{k,E}^{T} E_{x_j}^{T} + E_{x_i} W_q W_{k,R}^{T} R_{i-j}^{T} + u W_{k,E}^{T} E_{x_j}^{T} + v W_{k,R}^{T} R_{i-j}^{T}$$
where $u, v \in \mathbb{R}^{1 \times d}$ are learnable weights and $R_{i-j} \in \mathbb{R}^{1 \times d}$ is a relative positional encoding between $i$ and $j$. The four terms have the following intuitive meanings: term (1) represents content-based addressing, term (2) captures a content-dependent positional bias, term (3) governs a global content bias, and term (4) encodes a global positional bias.
For relative fully learnable positional encodings, $W_{k,R}^{T} R_{i-j}^{T}$ is a learnable weight in $\mathbb{R}^{d \times 1}$ for each $i - j \in \mathbb{N}$, while for relative partially learnable positional encodings $R_{i-j} = \mathrm{pe}(|i-j|)$, where pe is the sinusoidal function from before.
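A sketch of the relative partially learnable score, reusing sinusoidal_pe from the previous sketch (array shapes and names are our assumptions):

```python
# Sketch of A^rel_{i,j} with R_{i-j} = pe(|i-j|); E_i, E_j, u, v are (d,)
# vectors and the W matrices are (d, d) learnable weights.
import numpy as np

def rel_attention_score(E_i, E_j, Wq, Wk_E, Wk_R, u, v, i, j, d=64):
    R = sinusoidal_pe(abs(i - j), d)
    return (E_i @ Wq @ Wk_E.T @ E_j    # (1) content-based addressing
            + E_i @ Wq @ Wk_R.T @ R    # (2) content-dependent positional bias
            + u @ Wk_E.T @ E_j         # (3) global content bias
            + v @ Wk_R.T @ R)          # (4) global positional bias
```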
We implemented both fully and partially learnable positional encodings for the shortest-path positional encodings (integer-valued) and related versions for the other positional encodings (in Rd). We include results in Tables 8 and 9.
C OVER-SQUASHING
Results for over-squashing experiment can be found in Figure 1.
D ADDITIONAL EVALUATION OF POSITIONAL ENCODINGS
Here we provide toy data and a task for comparing positional encodings. The task assesses how powerful the positional encodings are in practice, i.e., how well they discriminate between different graph isomorphism classes. Specifically, we generate 100 random Erdős–Rényi graphs and then expand the receptive field so that each graph is fully connected. Thus, the positional encodings become the sole instrument for communicating the connectivity/topology of the original graph. The task is to retrieve a specific graph among all 100 graphs, i.e., graph classification with 100 classes. Hence, achieving 100% accuracy means that the GNN, based on the positional encodings alone, can discriminate between all the graphs. We only look at train accuracy here, since we are interested in the power to overfit, not to generalize. Results can be found in Figure 2.
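The task can be generated as follows (a sketch; graph size and edge probability are our choices, and rewire_shortest_path is from the earlier sketch):

```python
# Sketch of the toy discrimination task (ours): 100 connected Erdos-Renyi
# graphs, rewired until fully connected, with class label = graph identity.
import networkx as nx

graphs, seed = [], 0
while len(graphs) < 100:
    G = nx.erdos_renyi_graph(n=20, p=0.25, seed=seed)
    seed += 1
    if nx.is_connected(G):
        graphs.append(G)

dataset = [(rewire_shortest_path(G, r=len(G)), label)   # fully connected
           for label, G in enumerate(graphs)]           # 100-way classification
```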
All positional encodings are able to solve the task after a sufficient amount of training, except Adj-10. Adj-5 and Adj-10 encode the powers of the adjacency matrix up to 5 and 10, respectively (at both points all graphs are fully connected). Adj-10 encodes, between any two nodes, the number of paths of length 10, the number of paths of length 9, and so on. The experiments indicate that too much of this information confuses the GNN and makes it harder to discriminate between graphs. The shortest-path and Adj-5 positional encodings are the fastest at solving the task; the Laplacian positional encoding is slower, plausibly because it is only unique up to a sign and we randomly switch the sign during training.
E COMPUTATIONAL RUNTIME AND MEMORY USE
In our implementation, computing the positional encodings and expanding the r-hops of the graph are done in the same process for the shortest-path and adjacency positional encodings; thus this step always occurs, and we found that implementing it via iterative matrix multiplications of the adjacency matrix gave the fastest results. How this scales with r can be found in Figure 3; since each increment of r results in an additional matrix multiplication, the linear increase is expected. The spectral positional encoding has the same additive runtime per graph across r-sizes, of 1.3 × 10^-3 seconds. These matrix multiplications are done on CPU rather than GPU, and running them on GPU could result in speed-ups. However, the runtime for computing these positional encodings is at least an order of magnitude smaller (per graph) than the runtime for running the subsequent GNN on a GPU, so there was no need to optimize this step further.
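A sketch of this single pass (ours): iterated adjacency multiplications yield the powers-of-A features, and the first power at which a pair becomes reachable is its shortest-path distance.

```python
# Sketch (ours): one matmul per hop produces both encodings. Pairs in
# different components stay at infinity.
import numpy as np
import networkx as nx

def encodings_via_matmul(G: nx.Graph, r: int):
    A = nx.to_numpy_array(G)
    n = len(G)
    sp = np.full((n, n), np.inf)
    np.fill_diagonal(sp, 0.0)
    powers, Ak = [], np.eye(n)
    for k in range(1, r + 1):
        Ak = Ak @ A                          # A^k: one extra matmul per hop
        powers.append(Ak.copy())
        newly = (Ak > 0) & np.isinf(sp)      # first reachable at length k
        sp[newly] = k
    return np.stack(powers, axis=-1), sp     # [n, n, r] powers and [n, n] distances
```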
In Figures 4 and 5 we include the actual runtime of the GNN (on GPU) for different positional encodings and hop sizes, juxtaposed with the density of the modified graphs, for the ZINC and CIFAR10 datasets. Note that we here exclude the computation of the positional encoding on the input graph, which can be found in Figure 3.
Most graphs to which GNNs are applied are connected, and typically the number of edges is greater than the number of nodes, i.e., |E| ≥ |V|. Since all established GNNs make use of the edges in one way or another, the number of edges usually determines the asymptotic behavior of the runtime and memory use, i.e., they are in O(|E|). With modern deep learning and specialized graph-learning frameworks, GPU parallelization and other technical aspects also affect memory and runtime. Thus, Figures 4 and 5 compare theoretical runtime (dominated by the density) with the actual runtime of code run on GPUs. We find that density and actual runtime are strongly correlated. In Figure 6 we include the memory use for increasing radius on the ZINC dataset and find that it is roughly linear in the density as well.
F HOMOPHILY SCORE AND PERFORMANCE
We include experiments to investigate the correlation between homophily score (Ma et al., 2021) and performance when increasing the hop size. This applies to the node classification datasets that we used, CLUSTER and PATTERN. We split the test set into three buckets, i.e., a sorted segmentation of the graphs by increasing homophily score. We evaluate trained GatedGCN models with adjacency positional encodings for r values 1 and 2 (at r = 2 almost all graphs are fully connected). See Tables 10 and 11 for results.
We find that a high homophily score correlates much more strongly with performance at r = 1 than at r = 2. This indicates that an increased r-size diminishes the reliance on homophily as an inductive bias.
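For reference, a sketch of the homophily score and bucketing (helper names are ours; homophily is the fraction of edges whose endpoints share a label):

```python
# Sketch of the homophily bucketing used above (helper names are ours).
import networkx as nx

def edge_homophily(G: nx.Graph, labels) -> float:
    same = sum(labels[u] == labels[v] for u, v in G.edges())
    return same / G.number_of_edges()

def homophily_buckets(graphs, labels_per_graph, num_buckets=3):
    scored = sorted(zip(graphs, labels_per_graph),
                    key=lambda gl: edge_homophily(*gl))
    size = len(scored) // num_buckets          # remainder joins the last bucket
    return [scored[i * size: (i + 1) * size if i < num_buckets - 1 else None]
            for i in range(num_buckets)]
```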
G OPTIMAL POSITIONAL ENCODING AND HOPS SIZE
Again, we recommend the adjacency positional encodings together with the CLS-node. In terms of ranked performance on the 6 datasets, adjacency and spectral positional encodings perform at the same level, but the spectral encoding performs considerably worse on the ZINC dataset, while the differences are smaller on the other datasets. The spectral encoding hard-codes global information on the nodes, and the size of the encoding vector is a hyperparameter; we found performance not to be overly sensitive to this hyperparameter, but future work could investigate it further. Spectral embeddings also use less memory, as they are not stored as edge features; however, since information is still propagated along edges, we find this memory saving to be significant but not asymptotically different. The adjacency encoding breaks down faster than the other positional encodings as r is increased; we believe this is due to the corresponding increase in the size of the embedding vectors and its introduction of low-signal information that is easy to overfit to, e.g., the number of paths of length 10 between two nodes (where any edge can be used multiple times). The Erdős experiments in Appendix D support this observation. All in all, however, the adjacency encoding stands out slightly considering performance, runtime, memory use, and the toy experiments. Furthermore, the CLS-node is part of the best-performing configuration more often than not, and it has the additional advantage of reaching peak performance at lower r-sizes, where in some cases it also reduces runtime and memory use compared to increasing the r-size.
In this work we do not find a fixed r-size that is optimal for all datasets. The optimal r depends on the dataset and the amount of compute available. Given the fixed amount of compute used in our experiments, we found that all the best performance occurred at r-sizes of four or smaller. We provide a heuristic for selecting a good r-size, but ultimately it depends on the amount of compute and memory available.

1. What is the focus and contribution of the paper on graph neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its design idea and logic?
3. What are the weaknesses of the paper, especially regarding innovation and originality?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the experiment setup and baseline comparisons?

Summary Of The Paper
This paper proposes a method to augment the input graph with additional nodes/edges and use positional encodings as the node and/or edge features, expanding receptive fields from 1-ring neighborhoods to r-ring neighborhoods. Empirical experiments show that relatively small r-hop neighborhoods sufficiently increase performance across models and that performance degrades in the fully connected setting.
Strengths And Weaknesses
Strengths
The design idea and logic of this framework are straightforward and clear, and with sufficient analysis and summary of related work, derivations, and proofs, the soundness and validity of the framework are verified theoretically.
The experimental settings are very detailed, and the experimental results can demonstrate the effectiveness and superiority of this method.
Weaknesses
In my opinion, the biggest problem with this framework is the lack of innovation. The positional encoding method and virtual-node strategy used in the method already exist in prior work. In effect, the work expands the receptive field from 1-hop to r-hop neighborhoods and analyzes the effect, which lacks innovation and originality.
Though your experimental setup is very complete and the experimental data are very informative, your experimental datasets are somewhat small compared to real-world ones; try to evaluate on a larger dataset. Also, why not include some well-known position-encoding-based GNNs as baselines, such as P-GNN (Position-aware graph neural networks, ICML 2019) and Graphormer (Do transformers really perform badly for graph representation?, NeurIPS 2021)? There is also a recent work that incorporates position encodings for graph rewiring (Position-aware Structure Learning for Graph Topology-imbalance by Relieving Under-reaching and Over-squashing, CIKM 2022), which should also be included in your baselines.
In the main text, it would be best to use a diagram to explain your method design. In addition, in the experimental section of the main text, the more interesting experimental results should be emphasized and analyzed further. Moreover, a large part of the main text introduces or summarizes other works, and the original material is relatively limited.
Clarity, Quality, Novelty And Reproducibility
This framework is of average quality and somewhat lacking in innovation. The description of the framework is not very clear, though the theoretical analysis is sufficient. In the experimental part, the validity analysis of the results is relatively redundant. The originality of the work is marginally below the average level.
ICLR | Title
Rewiring with Positional Encodings for GNNs
Abstract
Several recent works use positional encodings to extend the receptive fields of graph neural network (GNN) layers equipped with attention mechanisms. These techniques, however, extend receptive fields to the complete graph, at substantial computational cost and risking a change in the inductive biases of conventional GNNs, or require complex architecture adjustments. As a conservative alternative, we use positional encodings to expand receptive fields to r-hop neighborhoods. More specifically, our method augments the input graph with additional nodes/edges and uses positional encodings as node and/or edge features. We thus modify graphs before inputting them to a downstream GNN model, instead of modifying the model itself. This makes our method model-agnostic, i.e. compatible with any existing GNN architectures. We also provide examples of positional encodings that are lossless with a one-to-one map between the original and the modified graphs. We demonstrate that extending receptive fields via positional encodings and a virtual fully-connected node significantly improves GNN performance and alleviates over-squashing using small r. We obtain improvements on a variety of models and datasets, and reach state-of-the-art performance using traditional GNNs or graph Transformers.
1 INTRODUCTION
GNN layers typically embed each node of a graph as a function of its neighbors’ (1-ring’s) embeddings from the previous layer; that is, the receptive field of each node is its 1-hop neighborhood. Hence, at least r stacked GNN layers are needed for nodes to get information about their r-hop neighborhoods. Barceló et al. (2020) and Alon and Yahav (2021) identify two broad limitations associated with this structure: under-reaching occurs when the number of layers is insufficient to communicate information between distant vertices, while over-squashing occurs when certain edges act as bottlenecks for information flow.
Inspired by the success of the Transformer in natural language processing (Vaswani et al., 2017), recent methods expand node receptive fields to the whole graph (Dwivedi and Bresson, 2021; Ying et al., 2021). Since they effectively replace the topology of the graph with that of a complete graph, these works propose positional encodings that communicate the connectivity of the input graph as node or edge features. As these methods operate on fully-connected graphs, the computational cost of each layer is quadratic in the number of nodes, obliterating the sparsity afforded by conventional 1-ring based architectures. Moreover, the success of the 1-ring GNNs suggests that local feature aggregation is a useful inductive bias, which has to be learned when the receptive field is the whole graph, leading to slow and sensitive training.
In this paper, we expand receptive fields from 1-ring neighborhoods to r-ring neighborhoods, where r ranges from 1 (typical GNNs) to R, the diameter of the graph (fully-connected). That is, we augment a graph with edges between each node and all others within distance r in the input topology. We show that performance is significantly improved using fairly small r and carefully-chosen positional encodings annotating this augmented graph. This simple but effective approach can be combined with any GNN.
Contributions. We apply GNN architectures to augmented graphs connecting vertices to their peers of distance ≤ r. Our contributions are as follows: (i) We increase receptive fields using a modified graph with positional encodings as edge and node features. (ii) We compare r-hop positional encodings on the augmented graph, specifically lengths of shortest paths, spectral computations, and
powers of the graph adjacency matrix. (iii) We demonstrate that relatively small r-hop neighborhoods sufficiently increase performance across models and that performance degrades in the fully-connected setting.
2 RELATED WORK
The Transformer has permeated deep learning (Vaswani et al., 2017), with state-of-the-art performance in NLP (Devlin et al., 2018), vision (Parmar et al., 2018), and genomics (Zaheer et al., 2020). Its core components include multi-head attention, an expanded receptive field, positional encodings, and a CLS-token (virtual global source and sink nodes). Several works adapt these constructions to GNNs. For example, the Graph Attention Network (GAT) performs attention over the neighborhood of each node, but does not generalize multi-head attention using positional encodings (Veličković et al., 2018). Recent works use Laplacian spectra, node degrees, and shortest-path lengths as positional encodings to expand attention to all nodes (Kreuzer et al., 2021; Dwivedi and Bresson, 2021; Rong et al., 2020; Ying et al., 2021). Several works also adapt attention mechanisms to GNNs (Yun et al., 2019; Cai and Lam, 2019; Hu et al., 2020; Baek et al., 2021; Veličković et al., 2018; Wang et al., 2021b; Zhang et al., 2020; Shi et al., 2021).
Path and distance information has been incorporated into GNNs more generally. Yang et al. (2019) introduce the Shortest Path Graph Attention Network (SPAGAN), whose layers incorporate path-based attention via shortest paths between a center node and distant neighbors, using an involved hierarchical path aggregation method to aggregate a feature for each node. Like us, SPAGAN introduces the ≤ k-hop neighbors around the center node as a hyperparameter; their model, however, has hyperparameters controlling path sampling. Beyond SPAGAN, Chen et al. (2019) concatenate node features, edge features, distances, and ring flags to compute attention probabilities. Li et al. (2020) show that distance encodings (i.e., a one-hot feature of distance as an extra node attribute) obtain more expressive power than the 1-Weisfeiler-Lehman test. Graph-BERT introduces multiple positional encodings to apply Transformers to graphs and operates on sampled subgraphs to handle large graphs (Zhang et al., 2020). Yun et al. (2019) introduce the Graph Transformer Network (GTN) for learning a new graph structure, which identifies "meta-paths" and multi-hop connections to learn node representations. Wang et al. (2021a) introduce the Multi-hop Attention Graph Neural Network (MAGNA), which uses diffusion to extend attention to multi-hop connections. Frankel et al. (2021) extend GAT attention to a stochastically sampled neighborhood of neighbors within 5 hops of the central node. Isufi et al. (2020) introduce EdgeNets, which enable flexible multi-hop diffusion. Luan et al. (2019) generalize spectral graph convolution and GCN in block Krylov subspace forms.
Each layer of our GNN attends to the r-hop neighborhood around each node. Unlike SPAGAN and Graph-BERT, our method is model agnostic and does not perform sampling, avoiding their sampling-ratio and number-of-iterations hyperparameters. Unlike GTN, we do not restrict to a particular graph structure. Broadly, our approach does not require architecture or optimization changes. Thus, our work also joins a trend of decoupling the input graph from the graph used for information propagation (Veličković, 2022). For scalability, Hamilton et al. (2017) sample from a node’s local neighborhood to generate embeddings and aggregate features, while Zhang et al. (2018) sample to deal with topological noise. Rossi et al. (2020) introduce Scalable Inception Graph Neural Networks (SIGN), which avoid sampling by precomputing convolutional filters. Kipf and Welling (2017) preprocess diffusion on graphs for efficient training. Topping et al. (2021) use graph curvature to rewire graphs and combat over-squashing and bottlenecks.
In contrast, our work does not use diffusion, curvature, or sampling, but expands receptive fields via Transformer-inspired positional encodings. In this sense, we avoid the inductive biases from pre-defined notions of diffusion and curvature, and since we do not remove connectivity, injective lossless changes are easy to obtain.
3 PRELIMINARIES AND DESIGN
Let G = (V,E, f_v, f_e) denote a graph with nodes V ⊂ N_0 and edges E ⊆ V × V, and let G be the set of graphs. For each graph, the functions f_v : V → R^{d_v} and f_e : E → R^{d_e} denote node and edge features, respectively. We consider learning on graphs, specifically node classification and graph classification. At inference, the input is a graph G. For node classification, the task is to predict a node label l_v(v) ∈ R for each vertex v ∈ V. Using the node labels, the homophily of a graph is defined as the fraction of edges that connect nodes with the same labels (Ma et al., 2022). For graph classification, the task is to predict a label l_G ∈ R for the entire graph G. Given the tasks above, GNN architectures typically ingest a graph G = (V,E, f_v, f_e) and output either a label or a per-node feature. One can view these as an abstraction; e.g., a GNN for graph classification is a map F_θ : G → R^n with learnable parameters θ. These architectures vary in how they implement F_θ. Some key examples include the following: (i) Spatial models (Kipf and Welling, 2017) use the graph directly, computing node representations in each layer by aggregating representations of a node and its neighbors (1-ring). (ii) Spectral models (Bruna et al., 2014) use the eigendecomposition of the graph Laplacian to perform spectral convolution. (iii) Diffusion models (Wang et al., 2021a; Klicpera et al., 2019) use weighted sums of powers of the adjacency matrix to incorporate larger neighborhoods (r-hops). (iv) In Transformers (Kreuzer et al., 2021; Dwivedi and Bresson, 2021; Rong et al., 2020; Ying et al., 2021), each node forms a new representation by self-attention over the complete graph (R-hop neighborhood) using positional encodings. These approaches incorporate useful inductive biases while remaining flexible enough to learn from data.
Spatial models have been extremely successful, but recent work shows that they struggle with under-reaching and over-squashing (Alon and Yahav, 2021). Spectral approaches share a similar convolutional bias with spatial models and face related problems (Kipf and Welling, 2017). On the other hand, Transformers with complete attention and diffusion aim to alleviate the shortcomings of spatial models and show promising results. Due to complete attention, Transformers carry little inductive bias but are also computationally expensive. Diffusion explicitly incorporates the inductive bias that distant nodes should be weighted less in message aggregation, limiting its breadth of applicability.
We alleviate under-reaching and over-squashing while avoiding the computational load of complete attention by incorporating a more general proximity bias than diffusion without committing to a specific model. Our method is built on the observation that Fθ can be trained to ingest modified versions of the original graph that better communicate structure and connectivity. Hence, we add new edges, nodes, and features to the input graph. To still convey the original topology of the input graph, we add positional encodings. More formally, we design functions g ∶ G → G that modify graphs and give features to the new nodes and edges. These functions can be prepended to any GNN Fθ ∶ G → Rn as Fθ ○ g ∶ G → Rn. The following are desiderata informing our design of g: (i) ability to capture the original graph, (ii) ability to incorporate long-range connections, (iii) computational efficiency, and (iv) minimal and flexible locality bias. By using positional encodings and maintaining the original graph G as a subgraph of the modified graph, we capture the original graph in our modified input (Section 4.2.1). By expanding the receptive field around each node to r-hop neighborhoods we reduce computational load relative to complete-graph attention, with limited inductive bias stemming from proximity. Additionally, expanded receptive fields alleviate under-reaching and over-squashing (Section 6.1).
4 APPROACH
We modify graphs before inputting them to a downstream GNN model, instead of modifying the model itself. Our approach does not remove edges or nodes in the original graph but only adds elements. Given input G = (V,E, fv, fe), we create a new graph G′ = (V ′,E′, f ′v, f ′e) such that G is a subgraph of G′. Expanded receptive fields are achieved in G′ by adding edges decorated with positional encodings as node or edge attributes; we also add a fully-connected CLS node. G′ is still a graph with node and edge attributes to which we may apply any GNN. This process is represented by a function g ∶ G → G. We decompose the construction of g into topological rewiring and positional encoding, detailed below. In a slight abuse of notation, we will subsequently use G to denote only the subset of graphs relevant to a given machine learning problem. For example, for graph regression on molecules, G denotes molecule graphs, with atoms as nodes and bonds as edges.
4.1 TOPOLOGICAL REWIRING
We modify the input graph G to generate G′ in two steps:
Expanded receptive field. Given a graph G = (V,E, f_v, f_e) ∈ G and a positive integer r ∈ N_+, we add edges between all nodes within r hops of each other in G to create G′_r = (V,E′, f′_v, f′_e). If G is annotated with edge features, we assign to each edge in E′ ∖ E an appropriate constant feature C_e.

CLS node. Following Gilmer et al. (2017), we also include a "CLS" (classification) node in our graph, connected to all others. We follow this procedure: given a graph G, we (i) initialize a new graph G′ = (V′,E′, f′_v, f′_e) = G, (ii) add a new node v_CLS to V′, and (iii) set f′_v(v_CLS) := C_v for a constant C_v. Finally, we set E′ := E ∪ ⋃_{v∈V} {(v_CLS, v), (v, v_CLS)}, with f′_e((v_CLS, v)) = f′_e((v, v_CLS)) := C_e, where C_e is defined above. A minimal sketch of this rewiring follows.
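```python
# Minimal sketch of the rewiring g (ours, via networkx; feature names are
# illustrative): add all edges within r hops with constant feature C_e, then
# a fully connected CLS node with constant node feature C_v.
import networkx as nx

def rewire(G: nx.Graph, r: int, C_e=0.0, C_v=0.0, add_cls=True) -> nx.Graph:
    Gp = G.copy()
    dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=r))
    for u, nbrs in dist.items():
        for v in nbrs:
            if u != v and not Gp.has_edge(u, v):
                Gp.add_edge(u, v, f_e=C_e)      # new edge in E' \ E
    if add_cls:
        Gp.add_node("CLS", f_v=C_v)
        for v in G.nodes():
            Gp.add_edge("CLS", v, f_e=C_e)      # CLS connected to all nodes
    return Gp
```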
4.2 POSITIONAL ENCODINGS
Given only the connectivity of a rewired graph G′r = (V ′,E′, f ′v, f ′e) from the two-step procedure above, it may not be possible to recover the connectivity of the original graph G = (V,E, fv, fe). In the extreme, when r is large and G is connected, G′r could become fully-connected, meaning that all topology is lost—removing the central cue for graph-based learning. To combat this, we encode the original topology of G into G′r via positional encodings, which are node and/or edge features. We consider several positional encoding functions for edges pe ∶ G × V ′ × V ′ → Rn or nodes pv ∶ G × V ′ → Rn, appending the output of pe as edge or pv as node features to G′r. Section 4.2.1 lays out properties to compare choices of pe and/or pv . Then, Section 4.2.2 provides concrete positional encodings compared in our experiments that trade off between the properties we lay out.
4.2.1 PROPERTIES OF POSITIONAL ENCODINGS
There are countless ways to encode the subgraph topology of G within G′ in vertex features pv or edge features pe. Below, we state a few properties we can check to give a framework for comparing the capabilities and biases of possible choices.
Lossless encoding. While a GNN can ignore information in input G′, it cannot reconstruct information that has been lost in constructing G′ from G. Yet, there can be benefits in forgetting information, e.g. when dealing with noisy graphs or incorporating a stronger inductive bias (Rossi et al., 2020; Klicpera et al., 2019). That said, a simple property to check for G′ equipped with positional encoding features pe, pv is whether we can recover G from this information, that is, whether our encoding is lossless (or non-invasive). As long as it is possible to identify G within g(G), g is an injection and non-invasive. Hence, a sufficient condition for lossless positional encodings is as follows: If all edges in G′ have unique positional encodings, then g ∶ G → G is a bijection. One way to achieve this condition is to use an additional edge feature that is unique to the 1-ring.
Discriminative power. Following work investigating the discriminative power of GNNs (Xu et al., 2019; Brüel Gabrielsson, 2020), Ying et al. (2021) showed that expanded receptive fields together with shortest-path positional encodings are strictly more powerful than the 1-Weisfeiler-Lehman (WL) test and hence more powerful than 1-hop vanilla spatial GNN models (Xu et al., 2019). The combination of increased receptive fields, positional encodings, and choice of subsequent GNN models determines discriminative power. In fact, it follows from (Ying et al., 2021) that the positional encodings presented below together with an increased receptive field r > 1 and a vanilla spatial GNN model are strictly more powerful than the 1-WL test.
Computational time. Positional encodings may come at substantial computational cost when working with r-hop neighborhoods. The cost of computing positional encodings affects total inference time, which may be relevant in some learning settings. However, in our setting the runtime of computing positional encodings is an order of magnitude less than the subsequent inference time, and in our implementation the asymptotic runtimes of computing the positional encodings are the same. See Appendix E.
Local vs. global. The positional encoding of a vertex or edge can be local, meaning it incorporates information from a limited-sized neighborhood in G, or global, in which case adding or removing a node anywhere in G could affect all the positional encodings.
Inductive bias. Our positional encodings can bias the results of the learning procedure, effectively communicating to the downstream GNN which properties of G and G′ are particularly important for learning. Without positional encodings, our model would induce a bias stating that distances < r in our graph are insignificant. More subtly, suppose ℓ is the distance (of length ≤ r) between two nodes in G corresponding to a new edge in E′. Using ℓ directly as the positional encoding rather than a decaying function, e.g., e^{-αℓ}, makes it easier or harder (resp.) to distinguish long distances in G.
A related consideration involves whether our model can imitate the inductive bias of past work. For example, graph diffusion has been used to incorporate multi-hop connections into GNNs using fixed weights (Wang et al., 2021a). We can ask whether our positional encodings on G′ are sufficient to learn to imitate the behavior of a prescribed multi-hop model on G, e.g. whether a layer of our GNN applied to G′ can capture multi-hop diffusion along G.
Over-squashing and under-reaching. Section 6.1 demonstrates, via the NeighborsMatch problem (Alon and Yahav, 2021), that increased receptive fields as well as the CLS-node alleviate oversquashing; however, this toy problem is concerned with matching node attributes and not with graph topology. We want positional encodings that alleviate over-squashing in the sense that it enables effective information propagation for the task at hand. Our experiments showing that expanded receptive fields alleviate the over-squashing problem and that the best performing positional encoding varies across datasets showcase this. Additionally, our experiments on the discriminative power of positional encodings in Appendix D further help discern the different options.
4.2.2 POSITIONAL ENCODING OPTIONS
Taking the properties above into consideration, we now give a few options for positional encodings below, compared empirically in Section 6.
Shortest path. For any edge e ∈ G′_r, the shortest-path positional encoding takes p_e ∈ {0,1,...,r} to be the integer length of the shortest path in G between the endpoints of e. These embeddings are lossless because G is the subgraph of g(G) with p_e = 1. They are also free to compute given our construction of G′_r from G. However, multiple vertices in the r-neighborhood of a vertex in V could have the same positional encoding in V′, and shortest-path lengths are insufficient to capture the complex inductive biases of multi-hop GNNs like diffusion over large neighborhoods. Shortest-path positional encoding was previously used by Ying et al. (2021) for extending G to a fully-connected graph, but they did not consider smaller r values.
Spectral embedding. Laplacian eigenvectors embed graph vertices into Euclidean space, providing per-vertex features that capture multi-scale graph structure. They are defined by factorizing the graph Laplacian matrix, $\Delta = I - D^{-1/2} A D^{-1/2}$, where $D$ is the degree matrix and $A$ is the adjacency matrix. We call the result a spectral positional embedding. We can use the $q$ smallest non-trivial Laplacian eigenvectors of G as a node-based positional encoding $p_v : V' \to \mathbb{R}^q$. Following Dwivedi et al. (2020), since these eigenvectors are known only up to a sign, we randomly flip the sign during training. Prior work considers Laplacian eigenvectors as additional node features without topological rewiring (Dwivedi et al., 2020).
Spectral positional encodings do not necessarily make g injective. Even when $q = |V|$, this encoding fails to distinguish isospectral graphs (Von Collatz and Sinogowitz, 1957), but these are rarely encountered in practice. On the other hand, spectral signatures are common for graph matching and other tasks. Moreover, unlike the remaining features in this section, spectral positional encodings capture global information about G rather than only r-neighborhoods. Finally, we note that the diffusion equation for graphs can be written as $u_t = -\Delta u$; this graph PDE can be solved in closed form given the eigenvectors and eigenvalues of $\Delta$. Hence, given the spectral embedding of G in G′, we can simulate diffusion-based multi-hop GNN architectures up to spectral truncation.
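A sketch of computing this encoding (ours), including the random sign flip used during training:

```python
# Sketch of the spectral positional encoding (ours): the q smallest
# nontrivial eigenvectors of the normalized Laplacian, with a random sign
# flip applied during training since eigenvectors are unique only up to sign.
import numpy as np
import networkx as nx

def spectral_pe(G: nx.Graph, q: int, train: bool = True) -> np.ndarray:
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigvals, eigvecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    pv = eigvecs[:, 1:q + 1]                 # skip the trivial eigenvector
    if train:
        pv = pv * np.random.choice([-1.0, 1.0], size=(1, pv.shape[1]))
    return pv                                # [num_nodes, q], one row per node
```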
Powers of the adjacency matrix. Our final option for positional encoding generalizes the shortest-path encoding and can capture the inductive biases of diffusion-based GNNs. The entry at position $(i,j)$ of the $k$-th power $A^k$ of the adjacency matrix $A$ of graph G gives the number of paths of length $k$ between nodes $i$ and $j$ in G. Concatenating the powers for $k = 1, \dots, r$, we get for each edge $e$ in G′ an integer vector $p_e \in \mathbb{N}^r$, giving the powers-of-the-adjacency-matrix positional encoding. This embedding can be used to recover the shortest-path embedding. It can also generalize the inductive bias of diffusion-based multi-hop GNNs. In particular, diffusion aggregation weights are often approximated using a truncated Taylor series, $W = \sum_{i=0}^{\infty} \theta_i A^i \approx \sum_{i=0}^{r} \theta_i A^i =: \overline{W}$, where the $\theta_i$ are a prescribed decaying sequence ($\theta_i > \theta_{i+1}$). The entries of $\overline{W}$ can be computed linearly from the adjacency-powers positional encoding. Hence, it is strictly more general than using prescribed diffusion-based aggregation weights on G.
Lossless encodings. The previously discussed lossless-encoding properties of our graph rewiring method are accomplished by two of the above-mentioned positional encodings:
Proposition 1. Shortest-path and adjacency matrix positional encodings yield lossless rewirings.
Proof. Recovering the original graph G = (V,E) from the rewired graph G′ = (V,E′) is almost trivial. With the shortest-path position encoding the original graph can be recovered via E = {e∣e ∈ E′, pe = 1} and for powers-of-the-adjacency-matrix encodings via E = {e∣e ∈ E′, (pe)1 = 1}.
5 IMPLEMENTATION DETAILS
Our method is compatible with most GNN architectures. Here we adopt GatedGCN (Bresson and Laurent, 2018), MoNet (Monti et al., 2017), and an implementation of the Transformer (Vaswani et al., 2017); see Appendix B for details. For each model, we consider graph rewiring with a different r-hop receptive field around each node, and compare with and without the CLS-node, as well as the three positional encodings introduced in Section 4.2.2.
Input and readout layers. Typically, GNNs on a graph G = (V,E, fv, fe) first embed node features fv and edge features fe through a small feed-forward network (FFN) input layer. When incorporating positional encodings per edge/node, we embed using a small FFN and add them at this input layer. After this layer, it updates node and edge representations through successive applications of GNN layers. Lastly, a readout layer is applied to the last GNN layer L. For node classification, it is typically a FFN applied to each node feature hLi . For graph classification, it is typically an FFN applied to the mean or sum aggregation of all node features hL. For graph classification and when using the CLS-node, we aggregate by applying the FFN to the CLS-node’s features in the last layer.
6 EXPERIMENTS
We evaluate performance on six benchmark graph datasets: ZINC, AQSOL, PATTERN, CLUSTER, MNIST, and CIFAR10 from (Dwivedi et al., 2020). The benchmark includes a training time limit of 12 hours; we use similar compute to their work via a single TeslaV100 GPU. Training also stops if for a certain number of epochs the validation loss does not improve (Dwivedi et al., 2020). Thus, our experiments consider the ease of training and efficient use of compute. For the first two datasets, we run GatedGCN, MoNet, and Transformer to show that rewiring and positional encoding work for different models; for the other datasets we run only GatedGCN to focus on the effects of receptive field size, the CLS node, and positional encodings. For all datasets, we run with increasing receptive fields, with different positional encodings, and with or without the CLS-node. In the tables, density is the average of the densities (defined as the ratio ∣E∣/∣V ∣2) of each graph in the dataset rewired to the respective receptive field size. See Appendix A for details.
Table 1 compares our best results with other top performing methods and models. All our top performing models come from the GatedGCN, although the Transformer performs comparably; however, the Transformer was harder to train—see Appendix B. MoNet performs worse but still sees significant improvements from our approach. Our GatedGCN implementation was taken from the same work (Dwivedi et al., 2020) that introduced the benchmarks and code that we use. Thus, hyperparameters might be better adapted to the GatedGCN. This highlights the benefits of our modelagnostic approach, which allows us to pick the best models from Dwivedi et al. (2020) and combine them with our methods. Our approach with 100K parameters achieves state-of-the-art on all datasets among models with 100K parameters and even outperforms 500K-parameter models.
ZINC, Graph Regression. ZINC consists of molecular graphs and the task is graph property regression for constrained solubility. Each ZINC molecule is represented as a graph of atoms with nodes and bonds as edges. In Table 2 we present results for r from 1 to 10. The density column shows that these graphs are sparse and that the number of edges increases almost linearly as the receptive field r is increased. Performance across all settings noticeably improves when increasing r above 1. Top performance is achieved with the CLS-node and powers-of-the-adjacency positional encoding at r = 4, and at 52% of the edges and compute compared to complete attention. When using the CLS node and/or spectral positional encodings, top performance generally occurs at lower r, which is likely due to the global nature of these changes to the graphs. The GatedGCN and
Transformer perform comparably for the same settings, with a slight edge to the GatedGCN. The two models show the same performance trends between settings, i.e., both increased receptive fields and the CLS-node boost performance. Further, Ying et al. (2021) include a performance of 0.123 on ZINC with their Graphormer(500K), i.e., a Transformer with positional encodings and complete attention. However, their training is capped at 10,000 epochs while ours is capped at 1,000 epochs; training their Graphormer(500K) with same restrictions leads to a score of 0.26 on ZINC.
AQSOL, Graph Regression. AQSOL consists of the same types of molecular graphs as ZINC. The densities of AQSOL graphs are slightly higher than those of ZINC. For all settings not including CLS-node or spectral positional encodings, performance improves significantly when increasing r above 1 (see Table 3); in these settings, better performing r are larger than for ZINC. However, when including CLS node or spectral positional encodings, performance changes much less across different r. This indicates the importance of some form of global bias on this dataset. At least one of larger r values, spectral positional encoding, or the CLS-token is required to provide the global bias, but the effect of them differs slightly across the two models. GatedGCN performs significantly better, and larger r-values still boosts performance when combined with the CLS-token for MoNet, but not for GatedGCN. MoNet uses a Bayesian Gaussian Mixture Model (Dempster et al., 1977) and since MoNet was not constructed with edge-features in mind, we simply add edge embeddings to the attention coefficients. Not surprisingly, this points to the importance of including edge features for optimal use of expanded receptive fields and positional encodings.
CLUSTER, Node Classification. CLUSTER is a node classification dataset generated using a stochastic block model (SBM). The task is to assign a cluster label to each node. There is a total of 6 cluster labels and the average homophily is 0.34. CLUSTER graphs do not have edge features. Table 4 gives results for r-hop neighborhoods from 1 to 3. As can be seen in the density column, at r = 3 all graphs are fully connected, and more than 99% of them are fully connected at r = 2. Hence, these graphs are dense. Significant improvements are achieved by increasing r for all but the spectral positional encoding (again showcasing its global properties), which together with the CLS node perform competitively at r = 1. The CLS node is helpful overall, especially at r = 1. The GatedGCN and Transformer perform comparably for all but the spectral positional encodings where the Transformer breaks down. We found that this breakdown was due to the placement of batch normalization, discussed in Appendix B.1.
PATTERN, Node Classification. The PATTERN dataset is also generated using a SBM model, and has an average homophily of 0.66. The task is to classify the nodes into two communities and graphs have no edge features. Table 5 shows results for r-hops from 1 to 3. Similarly to CLUSTER, the density column shows that the graphs are dense. Significant improvements are achieved by increasing r > 1 and/or using the CLS-node. Performance generally decreases at r = 3. Similarly to CLUSTER, the CLS-node helps at r = 1, but for both CLUSTER and PATTERN, top performing model comes from a larger r > 1 without the CLS-node, suggesting that trade-offs exist between CLS-node and increased receptive fields. Compared to CLUSTER, our approach shows less performance boost for PATTERN, which lead us to hypothesize that our approach is more helpful for graphs with low homophily which we investigate further in Appendix F.
MNIST, Graph Classification. MNIST is an image classification dataset converted into super-pixel graphs, where each node’s feature includes super-pixel coordinates and intensity. The images are of handwritten digits, and the task is to classify the digit. Table 6 summarizes results for r from 1 to 3. Not all graphs are fully connected at r = 3, but training at r = 4 exceeds our memory limit. Noticeable performance gains are achieved at r = 2, but performance generally decreases at r = 3. The CLS-node consistently improves performance at r = 1 but not otherwise, indicating that the CLS-node and increased r-size have subsumed effects.
CIFAR10, Graph Classification. CIFAR10 is an image classification dataset converted into super-pixel graphs, where each node's features are the super-pixel coordinates and intensity. The images depict ten classes of natural objects, and the task is to classify the object, e.g., dog, ship, or airplane. Table 7 provides results for r from 1 to 3. Not all graphs are fully connected at r = 3, but training at r = 4 led to out-of-memory issues. Top-performing versions are all at r = 1, and performance degrades for r > 1. As with MNIST, the CLS-node only improves performance at r = 1, again indicating its shared (subsumed) effects with increased r-sizes.
6.1 NEIGHBORSMATCH, OVER-SQUASHING
Alon and Yahav (2021) introduce a toy problem called NeighborsMatch to benchmark the extent of over-squashing in GNNs, while controlling over-squashing by limiting the problem radius rp. The graphs in the dataset are binary trees of depth equal to the problem radius rp. Thus, the graphs are
structured and sparse, and the number of edges grows linearly with the increased receptive field r. See Figure 1, Appendix C, for results with GatedGCN. Increasing the receptive field r with a step of 1 increases the attainable problem radius with a step of 1, while using the CLS-node at r = 1 falls between the performance of r = 2 and r = 3, but with a much longer tail. This further showcases the partially subsumed as well as distinct (complementary and conflicting) effects that the receptive field and the CLS-node have, as also observed on the other benchmarks.
6.2 COMPUTATIONAL ANALYSIS
For all positional encodings, the number of edges determines the asymptotic runtime and memory use; the CLS-node only introduces an additive factor. Figures 4 and 5 in Appendix E show that, as the receptive field size is increased, the runtime in practice scales roughly the same as the density, though the real runtime carries a significant constant factor.
6.3 SELECTING POSITIONAL ENCODING AND HOPS SIZE
We recommend the adjacency positional encodings together with the CLS-node. In terms of ranked performance across the 6 datasets, adjacency and spectral positional encodings perform the same, but the spectral encoding performs considerably worse on the ZINC dataset, while the differences are smaller on the other datasets. Additional experiments in Appendix D, Figure 2, assess the discriminative power of the different encodings. However, no positional encoding is superior in all aspects. Instead, each one has unique benefits as well as drawbacks. This is made apparent by considering r as a parameter and observing the performance differences across values of r. Furthermore, the CLS-node is part of the best-performing configuration more often than not. Similarly, no fixed r is optimal for all datasets. Instead, the optimal r depends on the dataset and the amount of compute. Appendix F shows that increased r diminishes the reliance on homophily as an inductive bias, and thus low homophily of a dataset could be used as an indicator for selecting an increased r. If the density does not change much from a change in r, then neither does performance. The use of the spectral positional encodings, the CLS-node, or increased r have subsuming effects on multiple datasets; here the CLS-node or spectral positional encodings may be preferred as computationally cheaper alternatives to increasing r.
From this empirical study, for picking an optimal r we recommend computing the densities for increasing r and picking the first r where the average density exceeds 0.5, as this reaps most of the performance boosts. This seems to maintain a helpful locality bias as well as to significantly reduce the compute compared to complete attention. See Appendix G for further discussion.
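A minimal sketch of this heuristic, assuming dense binary NumPy adjacency matrices without self-loops; the function names `density` and `pick_r` and the `max_r` cap are our own illustration, not code from the paper:

```python
import numpy as np

def density(reach: np.ndarray) -> float:
    """Fraction of possible (off-diagonal) node pairs connected in `reach`."""
    n = reach.shape[0]
    return (reach.sum() - np.trace(reach)) / (n * (n - 1))

def pick_r(adjacencies, threshold: float = 0.5, max_r: int = 10) -> int:
    """Return the smallest r whose average r-hop density exceeds `threshold`."""
    power = [a.astype(np.int64) for a in adjacencies]   # A^r per graph
    reach = [(p > 0).astype(np.int64) for p in power]   # reachable within r hops
    for r in range(1, max_r + 1):
        if np.mean([density(m) for m in reach]) > threshold:
            return r
        power = [p @ a for p, a in zip(power, adjacencies)]          # A^(r+1)
        reach = [((m + p) > 0).astype(np.int64) for m, p in zip(reach, power)]
    return max_r
```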
7 DISCUSSION
Our simple graph rewiring and positional encodings achieve state-of-the-art performance, widening receptive fields while alleviating over-squashing. This is largely due to the ease of applying our method to models that stem from a large body of work on GNNs, highlighting the benefits of our model-agnostic approach.
The reality is that attention with complete receptive fields is still computationally intractable for most practitioners and researchers. However, here we show that significant performance boosts via attention and increased receptive fields can be obtained by increasing the receptive field only slightly. This opens up recent work to a broader range of practitioners and provides fairer conditions for comparing GNNs. In addition, the systematic investigation of increased receptive fields and positional encodings gives further insights into the necessity of homophily for the success of GNNs and highlights other implicit biases in GNN architectures.
A TRAINING DETAILS
Both code and training follow Dwivedi et al. (2020) closely, and to a lesser extent Dwivedi and Bresson (2021), which uses the same code base.
Like Dwivedi et al. (2020), we use the Adam optimizer (Kingma and Ba, 2015) with the same learning rate decay strategy. The initial learning rate is set to 10−3 and is reduced by half if the validation loss does not improve after a fixed number of epochs ("lr_schedule_patience"), either 5 or 10. Instead of setting a maximum number of epochs, training is stopped either when the learning rate has reached 10−6 or when the computational time reaches 12 hours (6 hours for NeighborsMatch). Experiments are run with 4 different seeds; we report summary statistics from the 4 results.
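For concreteness, a minimal PyTorch sketch of this schedule; the tiny stand-in model and random data are our own placeholders, and real runs step the scheduler on the validation loss and also stop on a wall-clock budget:

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 10)  # stand-in for the actual GNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=10)  # halve LR on plateau

for step in range(1000):  # no fixed epoch cap in the paper's setup
    x, y = torch.randn(32, 16), torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())  # the paper uses validation loss here
    if optimizer.param_groups[0]["lr"] <= 1e-6:
        break  # stop once the learning rate has decayed to 1e-6
```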
Below we include training settings for the different datasets.
A.1 ZINC
"model": GatedGCN and Transformer, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.2 AQSOL
"model": GatedGCN and MoNet, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.3 CLUSTER
"model": GatedGCN and Transformer, "batch_size": 48 (GatedGCN), 32 or 16 (Transformer), "lr_schedule_patience": 5, "max_time": 12
A.4 PATTERN
"model": GatedGCN, "batch_size": 48, "lr_schedule_patience": 5, "max_time": 12
A.5 MNIST
"model": GatedGCN, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.6 CIFAR10
"model": GatedGCN, "batch_size": 128, "lr_schedule_patience": 10, "max_time": 12
A.7 NEIGHBORSMATCH
"model": GatedGCN,
"batch_size": 256, "lr_schedule_patience": 10, "max_time": 6
B TRANSFORMER IMPLEMENTATION
We implemented a simple version of the Transformer adapted to graphs:
$$\hat{h}_i^{l} = \mathrm{BN}(h_i^{l-1}), \qquad \hat{\hat{h}}_i^{l} = \Big\Vert_{k=1}^{H}\Big(\sum_{j \in \mathcal{N}_i \cup \{i\}} a_{i,j}^{l,k}\, W_k^{l}\, \hat{h}_j^{l}\Big) + h_i^{l-1}, \qquad h_i^{l} = \mathrm{FFN}\big(\mathrm{BN}(\hat{\hat{h}}_i^{l})\big) + \hat{\hat{h}}_i^{l},$$

with

$$\hat{e}_{i,j}^{l} = \mathrm{BN}(e_{i,j}^{l-1}), \qquad \hat{a}_{i,j}^{l,k} = \Big((A_k^{l}\hat{h}_i^{l})^{T}(B_k^{l}\hat{h}_j^{l}) + C_k^{l}\hat{e}_{i,j}^{l}\Big)/d, \qquad a_{i,j}^{l,k} = \frac{\exp(\hat{a}_{i,j}^{l,k})}{\sum_{j' \in \mathcal{N}_i \cup \{i\}} \exp(\hat{a}_{i,j'}^{l,k})}, \qquad e_{i,j}^{l} = \mathrm{FFN}(\hat{e}_{i,j}^{l}) + e_{i,j}^{l-1}.$$
Here, $h$ and $e$ are node and edge features (resp.) from the previous layer. $W_k, A_k, B_k \in \mathbb{R}^{d/H \times d}$ and $C_k \in \mathbb{R}^{1 \times d}$ are learnable weight matrices, $H$ is the number of attention heads, and BN is short for batch normalization. $\Vert_{k=1}^{H}$ denotes the concatenation of the attention heads.
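As a concrete reference, below is a minimal single-graph PyTorch sketch of the layer above. It is our own simplified illustration, not the exact training code: biases and dropout are omitted, the edge-feature batch normalization and update are dropped for brevity, and the per-head edge term is produced by a single linear map `C`:

```python
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    """Masked multi-head attention over an (expanded) neighborhood."""
    def __init__(self, d: int, heads: int):
        super().__init__()
        assert d % heads == 0
        self.d, self.H, self.dh = d, heads, d // heads
        self.A = nn.Linear(d, d, bias=False)       # queries
        self.B = nn.Linear(d, d, bias=False)       # keys
        self.W = nn.Linear(d, d, bias=False)       # values
        self.C = nn.Linear(d, heads, bias=False)   # one edge scalar per head
        self.bn_in, self.bn_out = nn.BatchNorm1d(d), nn.BatchNorm1d(d)
        self.ffn = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))

    def forward(self, h, e, mask):
        # h: (n, d) nodes, e: (n, n, d) edges, mask: (n, n) bool adjacency
        # (mask must include self-loops so every softmax row is well-defined)
        n = h.size(0)
        h_hat = self.bn_in(h)
        q = self.A(h_hat).view(n, self.H, self.dh)
        k = self.B(h_hat).view(n, self.H, self.dh)
        v = self.W(h_hat).view(n, self.H, self.dh)
        logits = torch.einsum("ihd,jhd->ijh", q, k) / self.d + self.C(e)
        logits = logits.masked_fill(~mask.unsqueeze(-1), float("-inf"))
        a = torch.softmax(logits, dim=1)                       # normalize over neighbors j
        hh = torch.einsum("ijh,jhd->ihd", a, v).reshape(n, self.d) + h  # residual
        return self.ffn(self.bn_out(hh)) + hh
```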
B.1 DESIGN CHOICES AND CHALLENGES
There are many variations on the Transformer model. Following Ying et al. (2021), we put the normalization before the multi-head attention, which caused instability when training on CLUSTER with Laplacian (spectral) positional encodings. This was fixed by putting the normalization after or using layer normalization instead of batch normalization; however, these changes reduced performance on ZINC. While the GatedGCN worked well with identical architecture parameters across datasets, we found that the Transformer needed more variations to stay competitive on MNIST and CIFAR10; in particular, fewer layers and larger hidden dimensions.
Transformers use multi-head attention, which stores a num_heads-dimensional vector on each edge (edges are treated as directed). Hence, the memory load becomes 2 × |E| × num_heads (in our experiments, num_heads = 6), whereas for GatedGCN it is only 2 × |E|. This causes a memory bottleneck for the Transformer that may force one to use a reduced batch size to avoid memory issues.
B.2 OTHER VARIANTS
We implemented other variants, including more involved Transformers. As in (Vaswani et al., 2017), we ran the path-integers through sine and cosine functions of different frequencies, and inspired by (Dai et al., 2019; Ke et al., 2020) we implemented a more involved incorporation of relative positions in the multi-head attention (see below); however, we found performance to be comparable.
In natural language processing, the input is a sequence (a line graph) $x = (x_1, \dots, x_n)$ of text tokens from a vocabulary set $V$, with each token having a one-hot encoding $f_V : V \to \{0,1\}^{|V|}$. The word embeddings $E \in \mathbb{R}^{n \times d}$ for $n$ tokens are formed as $E = (W_{\mathrm{embed}} f_V(x_i) \mid x_i \in x)$, where $W_{\mathrm{embed}} \in \mathbb{R}^{d \times |V|}$ is a learnable weight matrix. The original Transformer model used absolute positional encodings. This means that we add the positional encoding to the node embedding at the input layer. Consider a positional encoding function $\mathrm{pe} : \mathbb{N}_0 \to \mathbb{R}^d$. Then the first input is

$$h^0 = \big(W_{\mathrm{embed}} f_V(x_i) + \mathrm{pe}(i) \mid i = 1, \dots, n\big) = E + U,$$

where $U = (\mathrm{pe}(i) \mid i = 1, \dots, n) \in \mathbb{R}^{n \times d}$. Typically pe contains sine and cosine functions of different frequencies:

$$\mathrm{pe}(k, 2l) = \sin\big(k / 10000^{2l/d}\big), \qquad \mathrm{pe}(k, 2l+1) = \cos\big(k / 10000^{2l/d}\big),$$
where k ∈ N is the position and l ∈ N is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 × 2π. This function was chosen because it was hypothesized that it would allow the model to easily learn to attend by relative positions, since for any fixed offset m, pe(k +m) is a linear function of pe(k). It was also hypothesized that it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
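For reference, a small NumPy-free PyTorch sketch of this sinusoidal encoding (the function name is ours, and we assume an even embedding dimension d):

```python
import torch

def sinusoidal_pe(n: int, d: int) -> torch.Tensor:
    """pe[k, 2l] = sin(k / 10000^(2l/d)), pe[k, 2l+1] = cos(k / 10000^(2l/d))."""
    assert d % 2 == 0, "assumes an even embedding dimension"
    k = torch.arange(n, dtype=torch.float32).unsqueeze(1)   # positions (n, 1)
    freq = torch.pow(10000.0, torch.arange(0, d, 2, dtype=torch.float32) / d)
    pe = torch.zeros(n, d)
    pe[:, 0::2] = torch.sin(k / freq)
    pe[:, 1::2] = torch.cos(k / freq)
    return pe
```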
In many cases, absolute positional encodings have been replaced with relative fully learnable positional encodings and relative partially learnable positional encodings (Dai et al., 2019). To justify these, consider the first attention layer with absolute positional encodings:
$$A_{i,j}^{\mathrm{abs}} = E_{x_i} W_q W_k^{T} E_{x_j}^{T} + E_{x_i} W_q W_k^{T} U_j^{T} + U_i W_q W_k^{T} E_{x_j}^{T} + U_i W_q W_k^{T} U_j^{T}.$$
For relative (fully and partially) learnable positional encodings we have instead:
$$A_{i,j}^{\mathrm{rel}} = E_{x_i} W_q W_{k,E}^{T} E_{x_j}^{T} + E_{x_i} W_q W_{k,R}^{T} R_{i-j}^{T} + u\, W_{k,E}^{T} E_{x_j}^{T} + v\, W_{k,R}^{T} R_{i-j}^{T},$$
where $u, v \in \mathbb{R}^{1 \times d}$ are learnable weights and $R_{i-j} \in \mathbb{R}^{1 \times d}$ is a relative positional encoding between $i$ and $j$. The four terms have the following intuitive meanings: term (1) represents content-based addressing, term (2) captures a content-dependent positional bias, term (3) governs a global content bias, and term (4) encodes a global positional bias.
For relative fully learnable positional encodings, $W_{k,R}^{T} R_{i-j}^{T}$ is a learnable weight in $\mathbb{R}^{d \times 1}$ for each $i - j \in \mathbb{N}$, while for relative partially learnable positional encodings $R_{i-j} = \mathrm{pe}(|i - j|)$, where pe is the sinusoidal function from before.
We implemented both fully and partially learnable positional encodings for the shortest-path positional encodings (integer-valued) and related versions for the other positional encodings (in Rd). We include results in Tables 8 and 9.
C OVER-SQUASHING
Results for over-squashing experiment can be found in Figure 1.
D ADDITIONAL EVALUATION OF POSITIONAL ENCODINGS
Here we provide toy data and a task for comparing positional encodings. In this task we wish to assess how powerful the positional encodings are in practice, i.e., how well they discriminate between different graph isomorphism classes. Specifically, we generate 100 random Erdos graphs and then expand the receptive field so that each graph is fully connected. Thus, the positional encodings become the sole instrument for communicating the connectivity/topology of the original graph. The task is to retrieve a specific graph among all 100 graphs, i.e., the task is graph classification with 100 classes. Hence, achieving 100% accuracy means that the GNN, based on the positional encodings alone, has been able to discriminate between all graphs. We only look at train accuracy here, since we are interested in the power to overfit, not to generalize. Results can be found in Figure 2.
All positional encodings are able to solve the task after a sufficient amount of training, besides Adj-10. Adj-5 and Adj-10 encode the adjacency matrix to the power of 5 and 10, respectively (at both powers all graphs are fully connected). Adj-10 encodes between any two nodes the number of paths of length 10, the number of paths of length 9, and so on. The experiments indicate that too much of such information confuses the GNN and makes it harder to discriminate between graphs. The shortest-path and Adj-5 positional encodings are the fastest at solving the task; the Laplacian encoding is slower, which can be due to the fact that it is only unique up to a sign and that we randomly switch the sign during training.
E COMPUTATIONAL RUNTIME AND MEMORY USE
In our implementation, computing the positional encodings and expanding the r-hops of the graph is done in the same process for shortest-path and adjacency positional encodings; thus this step always occurs, and we found that implementing it via iterative matrix multiplications of the adjacency matrix gave the fastest results. How this scales with the r-size can be found in Figure 3. Since each increment of the r-size results in an additional matrix multiplication, the linear increase is expected. The spectral positional encoding has the same additive runtime per graph across r-sizes, of 1.3 × 10−3 seconds. These matrix multiplications are done on CPU rather than GPU, but running them on GPU could result in speed-ups. However, the runtime for computing these positional encodings is at least an order of magnitude smaller (per graph) than the runtime for running the subsequent GNN on a GPU, so there was no need to optimize this step further.
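A sketch of this preprocessing step, assuming a dense NumPy adjacency matrix; the function name is ours. It returns the expanded r-hop edge set and, for each new edge, the vector of path counts from (A^1, ..., A^r) used as the adjacency positional encoding:

```python
import numpy as np

def expand_with_adjacency_pe(adj: np.ndarray, r: int):
    """Iterative matrix powers give both r-hop edges and adjacency encodings."""
    a = adj.astype(np.int64)
    powers = [a]
    for _ in range(r - 1):
        powers.append(powers[-1] @ a)          # A^(k+1) = A^k @ A
    counts = np.stack(powers, axis=-1)         # (n, n, r) path counts per length
    reach = counts.sum(-1) > 0                 # nodes connected within r hops
    np.fill_diagonal(reach, False)             # drop self-loops
    src, dst = np.nonzero(reach)
    return (src, dst), counts[src, dst]        # edge list and edge encodings
```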
In Figures 4 and 5 we report the actual runtime of the GNN (on GPU) for different positional encodings and hop sizes, juxtaposed with the density of the modified graphs, for the ZINC and CIFAR10 datasets. Note that this excludes the computation of the positional encoding on the input graph, which can be found in Figure 3.
Most graphs to which GNNs are applied are connected, and typically the number of edges is greater than the number of nodes, i.e., $|E| \ge |V|$. Since all established GNNs make use of the edges in one way or another, the number of edges usually determines the asymptotic behavior of the runtime and memory use, i.e., they are in $O(|E|)$. With modern deep learning and specialized graph learning frameworks, GPU parallelization and other more technical aspects also affect memory and runtime. Thus, Figures 4 and 5 compare theoretical runtime (dominated by the density) with actual runtime of code run on GPUs. We find that density and actual runtime are strongly correlated. In Figure 6 we include the memory use for increasing radius on the ZINC dataset and find it is roughly linear in the density as well.
F HOMOPHILY SCORE AND PERFORMANCE
We include experiments to investigate the correlation between homophily score (Ma et al., 2021) and performance when increasing hop size. This applies to the node classification datasets we used, CLUSTER and PATTERN. We split the test set into three buckets, which is simply a sorted segmentation of the graphs by increasing homophily score. We evaluate trained GatedGCN models with adjacency positional encodings for r-values 1 and 2 (at r = 2 almost all graphs are fully connected). See Tables 10 and 11 for results.
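A sketch of this bucketing, assuming the common edge-homophily definition (the fraction of edges joining same-label nodes; the exact score in Ma et al. (2021) may differ, and the function names are ours):

```python
import numpy as np

def edge_homophily(src: np.ndarray, dst: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of edges whose endpoints share a node label."""
    return float(np.mean(labels[src] == labels[dst]))

def three_buckets(graphs):
    """Sort graphs by homophily and split into three equally sized buckets."""
    scores = [edge_homophily(s, d, y) for (s, d, y) in graphs]
    order = np.argsort(scores)
    return np.array_split(order, 3)  # indices of low/mid/high-homophily graphs
```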
We find that a high homophily score correlates much more strongly with performance at r = 1 than it does at r = 2. This indicates that increased r-size diminishes the reliance on homophily as an inductive bias.
G OPTIMAL POSITIONAL ENCODING AND HOPS SIZE
Again, we recommend the adjacency positional encodings together with the CLS-node. We find that in terms of ranked performance on the 6 datasets, adjacency and spectral positional encodings perform at the same level, but the spectral encoding performs considerably worse on the ZINC dataset, while the differences are smaller on the other datasets. The spectral encodings hardcode global-to-local information on the nodes, and the size of the encoding vector is a hyperparameter; we found performance not to be too sensitive to this hyperparameter, but future work could investigate it further. Spectral embeddings also use less memory as they do not encode their embeddings as edge features; however, since information is still propagated along edges, we find this memory saving to be significant but not asymptotically different. The adjacency encoding breaks down faster as the r-size is increased compared to the other positional encodings; we believe this to be due to the corresponding increase in the size of the embedding vectors and its introduction of low-signal information that is also easy to overfit to, e.g., the number of paths of length 10 between two nodes (where any edge can be used multiple times). The Erdos experiments in Appendix D support this observation. All in all, the adjacency encoding stands out slightly considering performance, runtime, memory use, and the toy experiments. Furthermore, the CLS-node is part of the best-performing configuration more often than not, and it has the additional advantage of reaching peak performance at lower r-sizes, where in some cases it also reduces runtime and memory use compared to increasing the r-size.
In this work we do not find a fixed r-size that is optimal for all datasets. The optimal r depends on the dataset and the amount of compute available. Given the fixed amount of compute used in our experiments, we found that the best performance was always attained at an r-size of four or smaller. We provide a heuristic for selecting a good r-size, but ultimately it depends on the amount of compute and memory available. | 1. What is the focus of the paper regarding graph neural networks?
2. What are the strengths and weaknesses of the proposed approach in addressing under-reaching and over-squashing in GNNs?
3. Do you have any concerns or suggestions regarding the choice of r (hop size) and its relationship with homophily ratio?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the empirical evaluations and comparisons with other baseline models? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Recent efforts towards addressing under-reaching and over-squashing in graph neural networks (GNNs) include graph transformers (GTs) with topology-based positional encodings; however, the computational cost is quadratic in nature (since vanilla GTs act on fully connected graphs).
This paper proposes and studies a model-agnostic graph rewiring approach with (i) edges added between nodes and their r-hop neighbours (the paper studies different values of r from 1 to the graph diameter) and (ii) a virtual node connecting all the other nodes of the original graph (to encode global graph information).
To retain the graph topological information, the paper also empirically studies three types of positional encodings, viz., (i) shortest path, (ii) Laplacian eigenvectors, (iii) adjacency powers, in the form of node/edge features, and shows that GNNs acting on the rewired graph + positional encodings are more effective than traditional baselines (even for small values of r) on several datasets.
Strengths And Weaknesses
Strengths
+ The proposed methods are evaluated on node classification, graph classification, regression tasks (two datasets each) and the NeighboursMatch problem (for oversquashing) and compared with seven baseline models.
+ Based on the empirical evaluations of different r values with and without virtual nodes and three different positional encodings, the authors conclude that the adjacency positional encoding with virtual nodes works best and that an optimal r (i.e., hop size) generally depends on the homophily of the dataset.
Weaknesses
- The node classification datasets (PATTERN, CLUSTER) are generated using the stochastic block model (i.e., synthetic in nature).
- The idea of using positional encodings (PEs) for GNNs on molecular graph regression is not new; see for instance a prior work [Graph Neural Networks with Learnable Structural and Positional Representations, In ICLR'22].
- The idea of using a virtual node connecting all existing graph nodes without affecting the graph topology (i.e., ensuring that there is an inverse map back to the original graph) is also not new, see for instance a relevant prior work [Boosting Graph Structure Learning with Dummy Nodes, In ICML'22].
Clarity, Quality, Novelty And Reproducibility
Clarity
The paper is generally well-organised and well-written with a few caveats.
In the introductory part of Section 4 (titled Approach), the authors abuse the notation G slightly and then use G to denote only the subset of graphs relevant to a given machine learning (ML) problem (e.g., molecular graphs).
It is unclear how relevant the rewired graph given by g : G → G would be for the ML problem (i.e., it is unclear if the rewired graph would still be a molecule and, if so, how relevant it would be).
Quality
The quality of the paper can be strengthened regarding the arguments around the choice of r (i.e., the hop size) and the homophily ratio.
The authors investigate the correlation between homophily score and performance for increasing r on SBM datasets (CLUSTER and PATTERN).
It would be much more compelling to investigate the correlation on real-world node classification datasets with low homophily [1] and high homophily [2].
[1] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods, In NeurIPS'21.
[2] Open Graph Benchmark: Datasets for Machine Learning on Graphs, In NeurIPS'20.
Novelty
The novelty of the work can be strengthened by discussing existing work on using virtual/dummy nodes and positional encodings to boost GNNs.
Boosting Graph Structure Learning with Dummy Nodes, In ICML'22,
Graph Neural Networks with Learnable Structural and Positional Representations, In ICLR'22,
Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks, In ICLR'22.
Reproducibility
The code is not provided although the main part and the appendix include enough material, e.g., dataset details, baselines with references, hyperparameters, for an expert to replicate the results of the paper. |
ICLR | Title
On Flat Minima, Large Margins and Generalizability
Abstract
The intuitive connection to robustness and convincing empirical evidence have made the flatness of the loss surface an attractive measure of generalizability for neural networks. Yet it suffers from various problems such as computational difficulties, reparametrization issues, and a growing concern that it may only be an epiphenomenon of optimization methods. We provide empirical evidence that under the cross-entropy loss, once a neural network reaches a non-trivial training error, the flatness correlates well (via the Pearson Correlation Coefficient) with the classification margins, which allows us to better reason about the concerns surrounding flatness. Our results lead to the practical recommendation that when assessing generalizability one should consider a margin-based measure instead, as it is computationally more efficient, provides further insight, and is highly correlated to flatness. We also use our insight to replace the misleading folklore that small-batch methods generalize better because they are able to escape sharp minima. Instead we argue that large-batch methods did not have enough time to maximize margins and hence generalize worse.
1 INTRODUCTION
Understanding under which conditions a neural network will generalize from seen to unseen data is crucial, as it motivates design choices and principles which can greatly improve performance. Complexity or generalization measures are used to quantify the properties of a neural network which lead to good generalization. Currently however, established complexity measures such as VC-Dimension (Vapnik, 1998) or Rademacher Complexity (Bartlett & Mendelson, 2002) do not correlate with the generalizability of neural networks (e.g. see Zhang et al. (2016)). Hence many recommendations, such as reducing model complexity, early stopping, or adding explicit regularization are also not applicable or necessary anymore. Therefore, there is an ongoing effort to devise new complexity measures that may guide recommendations on how to obtain models that generalize well.
A popular approach is to consider the flatness of the loss surface around a neural network. Hochreiter & Schmidhuber (1997) used the minimum description length (MDL) argument of Hinton & Van Camp (1993) to claim that the flatness of a minimum can also be used as a generalization measure. Motivated by this new measure Hochreiter & Schmidhuber (1997), and more recently Chaudhari et al. (2019), developed algorithms with explicit regularization intended to converge to flat solutions. Keskar et al. (2016) then presented empirical evidence that flatness relates to improved generalizability and used it to explain the behavior of stochastic gradient descent (SGD) with large and small-batch sizes. Other works since have empirically corroborated that flatter minima generalize better (e.g. Jiang et al. (2019); Li et al. (2018); Bosman et al. (2020)).
There are however various issues that are still unresolved, which makes using flatness for constructing practical deep learning recommendations difficult. For one, flatness is computationally expensive to compute. The most common way to compute the flatness is via the Hessian, which grows quadratically in the number of parameters; this becomes too large when used with modern networks containing millions of parameters. It is also not clear to what extent flatness is a true measure of generalizability, capable of discerning which neural network will or will not generalize. Dinh et al. (2017) showed that reparametrizations affect flatness and a flat model can be made arbitrarily sharp without changing any of its generalization properties. In addition Probably Approximately Correct (PAC-Bayes) bounds that bound the generalizability in terms of the flatness are also either affected
by rescaling, impossible to evaluate, or loose (Neyshabur et al., 2017; Arora et al., 2018; Petzka et al., 2020). While there have been solutions attempting to prevent issues around reparametrization (Liang et al., 2019; Tsuzuku et al., 2019), it remains to be established whether flatness is merely an epiphenomenon of stochastic gradient descent or of other complexity measures, as Achille et al. (2018) and Jastrzebski et al. (2018) suggest. This motivates investigating possible correlations to more well-understood measures of generalization that may help alleviate issues surrounding flat minima, while allowing flat minima to be used when appropriate.
In this paper we will demonstrate a correlation to classification margins, which are a well-understood generalization measure. Margins represent the linearized distance to the decision boundaries of the classification region (Elsayed et al., 2018). An immediate consequence of such a relationship is that to assess generalizability, we could now simply use a computationally cheap and more robust margin-based complexity measure. Our contributions will demonstrate further practical implications of the relationship between margins and flatness, which open doors to valuable future work such as a better understanding of why and when a model generalizes and more principled algorithm design.
• We prove that under certain conditions flatness and margins are strongly correlated. We do so by deriving the Hessian trace for the affine classifier. Based on its form, we derive an expression in terms of classification margins which, with increasing training accuracy, we show correlates well with the Hessian trace for various neural network architectures. By being able to relate the two complexity measures, we are now able to provide various practical recommendations and offer different perspectives on phenomena that may not be explainable without such a view. These are shown in the following contributions.
• We use our insight to replace the misleading folklore that, unlike large-batch methods, small-batch methods are able to escape sharp minima (Keskar et al., 2016). We instead employ a margin perspective and use our empirical results along with recent results by Banburski et al. (2019) and Hoffer et al. (2017) to argue that a large batch method was unable to train long enough to maximize the margins. With our explanation, we help reframe the small and large-batch discussion and build further intuition.
• We show that once a neural network is able to correctly predict the label of every element in the training set it can be made arbitrarily flat by scaling the last layer. We are motivated by the relationship to margins which suffer from the same issue. We highlight this scaling issue because, in some instances, it may still be beneficial for algorithm design to be guided by convergence to flat regions. Hence, we need to account for scaling issues which make it difficult to use flatness to assess whether a network generalizes better than another.
Other works have made connections between flatness and well-behaved classification margins via visualizations (see Huang et al. (2019); Wang et al. (2018)), but they have not demonstrated a quantifiable relationship. Further work has used both the classification margins and flatness to construct PAC-Bayes bounds (Neyshabur et al., 2017; Arora et al., 2018), and have related flatness to increased robustness (Petzka et al., 2020; Borovykh et al., 2019) however they did not show when and to what extent these quantities are related.
We structure the paper as follows. In Section 2, we discuss both our notation and our motivation for choosing the cross-entropy loss and the Hessian trace as the flatness measure, and provide further background on the classification margins. In Section 3, we present our contribution showing a strong correlation between the margins and flatness by deriving the Hessian trace of the affine classifier. In Section 4, we combine recent results based on classification margins to offer a different perspective on the misleading folklore on why large-batch methods generalize worse. In Section 5, we highlight that networks can be made arbitrarily flat. Lastly, we offer our thoughts and future work in Section 6.
2 PROBLEM SETTING
We first define the basic notation that we use for a classification task. We let $X$ represent the input space and $Y = \{1, \dots, C\}$ the output space, where $C$ is the number of possible classes. The network architecture is given by $\varphi : \Theta \times X \to \mathbb{R}^{|Y|}$, where $\Theta$ is the corresponding parameter space. We measure the performance of a parameter vector by defining some loss function $\ell : \mathbb{R}^C \times Y \to \mathbb{R}$. If we have a joint probability distribution $D$ relating input and output space, then we would like to minimize the expected loss $L_D(\theta) = \mathbb{E}_{(x,y)\sim D}[\ell(\varphi(\theta, x), y)]$. Since we usually only have access to some finite dataset $D$, we denote the empirical loss by $\tilde{L}_D(\theta) = \frac{1}{|D|}\sum_{i=1}^{|D|} \ell(\varphi(\theta, x_i), y_i)$. If $L_D$ and $\tilde{L}_D$ are close, then we say a model generalizes well, as we were able to train on a finite dataset and extrapolate to the true distribution. We will use the cross-entropy loss, which is given by $\ell(\varphi(\theta, x), y) = -\log(S_y(\varphi(\theta, x)))$, where the softmax function $S : \mathbb{R}^C \to \mathbb{R}^C$ is given by $S(a)_i = \frac{e^{a_i}}{\sum_{j=1}^C e^{a_j}}$ (see Goodfellow et al. (2016)).
The choice of the cross-entropy function as the loss function has a significant impact on how the flatness measure behaves. Unlike the multiclass mean squared error (MMSE), exponential-type losses such as the cross-entropy loss have been shown to include implicit regularization which leads to margin-maximizing solutions for neural networks (Banburski et al., 2019). Also, various properties of flat minima which have been proven for the MMSE loss by Mulayoff & Michaeli are not applicable to the cross-entropy loss, further highlighting the fundamental differences between the loss functions. While the MMSE loss has shown some promise for many classification tasks (Hui & Belkin, 2020), the cross-entropy loss is still the most widely used loss and was the one primarily used for the empirical evidence around flat minima (Keskar et al., 2016; Chaudhari et al., 2019), which motivates our choice.
The qualitative description of a flat region was given by Hochreiter & Schmidhuber (1997) as "a large connected region in parameter space where the error remains approximately constant". We measure the flatness by the trace of the Hessian of the loss with respect to the parameters (in short, the Hessian trace), denoted by $\mathrm{Tr}(H_\theta(\tilde{L}_D(\theta)))$ (Dinh et al., 2017). Since the Hessian is symmetric, the Hessian trace is equivalent to the sum of its eigenvalues, which for a fixed parameter space is proportional to the expected increase of the second-order approximation of the loss around a fixed minimum $\theta$ in a random direction $\theta'$ with $\theta' \sim \mathcal{N}(\theta, I)$. Since we apply flatness arguments only close to minima, we assume that all eigenvalues are positive and that the Hessian trace is a good measure of flatness (Sagun et al., 2017). Even though the Hessian is only an approximation of flatness, it is often preferred as it allows us to reason about various directions in parameter space via its eigenvectors and eigenvalues (see Sagun et al. (2017); Chaudhari et al. (2019)) and alleviates the issue of infinitely long but sharp ridges making a minimum infinitely flat (Dinh et al., 2017; Freeman & Bruna, 2016). The Hessian has also been linked to feature robustness via its use in the second-order approximation of the loss (e.g., Petzka et al. (2020); Borovykh et al. (2019)) and is a promising quantity to relate to the margins.
As we are working with non-linear functions it is intractable to compute exact distances to the decision boundary; therefore we use a measure related to the linearized distance, as described in Elsayed et al. (2018). Under this view, larger margins are better because the data is further from the decision boundary. Specifically, we define the margins as in Neyshabur et al. (2017): for some vector $v \in \mathbb{R}^C$ and label $y$, we let the margin of $v$ be $\gamma(v, y) = |v_y - \max_{j \neq y} v_j|$. Since we use the margin in different contexts, we define both the output margins $\gamma(\varphi(\theta, x), y)$ and the margins of the model output after the softmax layer, $\gamma(S(\varphi(\theta, x)), y)$. Due to the intuition of margins relating to the regularity of the classification regions, they have been proven and shown to be a good generalization measure for linear networks (Langford & Shawe-Taylor, 2003) and later for neural networks (see Bartlett et al. (2017); Jiang et al. (2018; 2019)) when correctly adjusted. Due to results by Banburski et al. (2019) and Soudry et al. (2018), Poggio et al. (2019) claimed that a large part of the mystery around generalizability has been solved, since standard optimization methods maximize the margin instead of memorizing data.
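For reference, a small PyTorch sketch of this margin for a batch of model outputs (the function name is ours); the same helper computes output margins from logits and softmax margins from the probabilities:

```python
import torch

def margins(outputs: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """gamma(v, y) = |v_y - max_{j != y} v_j| for outputs of shape (B, C)."""
    true = outputs.gather(1, y.unsqueeze(1)).squeeze(1)          # v_y
    masked = outputs.scatter(1, y.unsqueeze(1), float("-inf"))   # hide true class
    runner_up = masked.max(dim=1).values                         # max_{j != y} v_j
    return (true - runner_up).abs()

logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
output_margins = margins(logits, labels)
softmax_margins = margins(torch.softmax(logits, dim=1), labels)
```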
3 THE MARGIN AND HESSIAN TRACE RELATIONSHIP
3.1 THE AFFINE CROSS-ENTROPY HESSIAN TRACE
Generally, it is difficult to derive a closed form solution of the Hessian trace due to the non-linear nature of neural networks. To gain insight into what may determine the flatness or sharpness of a solution we consider an affine prediction function for which we derive the following simple and insightful expression for the Hessian trace:
Proposition 3.1 (Affine Cross-Entropy Hessian Trace (ACEHT)). Assume an affine predictor given by $\varphi((\theta, b), x) = \theta x + b$ where $(\theta, b) \in \mathbb{R}^{C \times d} \times \mathbb{R}^C = \Theta$. Then the trace of the Hessian of the cross-entropy loss for this predictor is:

$$\mathrm{Tr}\big(H(\ell(\varphi((\theta, b), x), y))\big) = (|x|^2 + 1)\Big(1 - \sum_{j=1}^{C} S_j^2(\varphi((\theta, b), x))\Big) = (|x|^2 + 1)\big(1 - |S(\varphi((\theta, b), x))|^2\big).$$
The derivation is in Appendix C. We immediately observe that the trace of the Hessian is a product of the size of the input and $1 - \eta(S(\varphi(\theta, x)))$, where $\eta(S(\varphi(\theta, x))) = \sum_{j=1}^{C} S_j^2(\varphi(\theta, x))$; we can view $1 - \eta(S(\varphi(\theta, x)))$ as a confidence measure. In the visualization provided in Figure 1 we clearly see that $1 - \eta(S(\varphi(\theta, x)))$ is only zero when the predictor predicts one class with probability 1, regardless of whether it is the correct class or not. When the model is least confident, namely when every class is predicted with probability $1/C$, then $1 - \eta(S(\varphi(\theta, x)))$ is highest. Hence, in the affine case with a cross-entropy loss the Hessian trace can be seen as an indication of the model's confidence in its prediction. This confidence interpretation is also connected to classification margins by observing that $S_y \geq \gamma(S(\varphi(\theta, x)), y)$ and hence $1 - \sum_{j=1}^{C} S_j^2(\varphi(\theta, x)) \leq 1 - S_y^2(\varphi(\theta, x)) \leq 1 - \gamma^2(S(\varphi(\theta, x)), y)$. Therefore, if the margins are large then the region will also be flat. The intuition is that the error in the upper bound becomes smaller as $S_y$ becomes larger, i.e., when the model predicts correctly and confidently. We will also provide evidence for a converse, i.e., that a flat minimum has large margins, in the following experimental sections. Finally, we note that without the expression in Proposition 3.1 we would not have been able to derive the upper bound $1 - \gamma^2(S(\varphi(\theta, x)), y)$ without guesswork.
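A per-example sketch of these quantities in PyTorch; the helper names are ours, and the margin bound replaces the confidence term with the softmax margin as in the inequality above:

```python
import torch

def aceht(x: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
    """(|x|^2 + 1) * (1 - |S(phi)|^2); x: (B, ...), logits: (B, C)."""
    s = torch.softmax(logits, dim=1)
    return (x.flatten(1).pow(2).sum(1) + 1) * (1 - s.pow(2).sum(1))

def margin_bound(x: torch.Tensor, logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """(|x|^2 + 1) * (1 - gamma(S, y)^2), using the softmax margins."""
    s = torch.softmax(logits, dim=1)
    true = s.gather(1, y.unsqueeze(1)).squeeze(1)
    runner = s.scatter(1, y.unsqueeze(1), float("-inf")).max(1).values
    gamma = (true - runner).abs()
    return (x.flatten(1).pow(2).sum(1) + 1) * (1 - gamma.pow(2))
```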
3.2 EXTENSION TO THE NON-LINEAR CASE
Now we will attempt to extend the derivation of the previous section to the non-linear case. This is a challenging undertaking, so we will resort to numerical evidence. To extend the results from the affine case we will consider both the ACEHT and the upper bound $\mathrm{ACEHT}(S(\varphi(\theta, x))) \leq (|x|^2 + 1)\big(1 - \gamma^2(S(\varphi(\theta, x)), y)\big)$, to which we refer as the "margin bound". We will compare both quantities to the empirically derived Hessian trace. To compute the empirical Hessian trace we use the PyHessian package (Yao et al., 2019), which implements Hutchinson's method (Bai et al., 1996; Avron & Toledo, 2011).
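For intuition, a minimal sketch of Hutchinson's estimator, which approximates Tr(H) as the expectation of v^T H v over random Rademacher vectors v; this is our own illustration, not PyHessian's internals. Here `params` is assumed to be a list such as `list(model.parameters())` and `loss` a scalar built from them:

```python
import torch

def hutchinson_trace(loss: torch.Tensor, params, n_samples: int = 100) -> float:
    """Estimate Tr(H) of `loss` w.r.t. `params` via E[v^T H v], v ~ Rademacher."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimate = 0.0
    for _ in range(n_samples):
        vs = [torch.randint_like(p, 2) * 2.0 - 1.0 for p in params]  # +/-1 entries
        gv = sum((g * v).sum() for g, v in zip(grads, vs))           # g^T v
        hv = torch.autograd.grad(gv, params, retain_graph=True)      # H v
        estimate += sum((h * v).sum().item() for h, v in zip(hv, vs))
    return estimate / n_samples
```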
To compare the quantities we will compare them in terms of their distributions over the data. Specifically, let $(X, Y) \sim D$ and fix $\theta$; then we compute the Pearson Correlation Coefficient (r-value) (Lee Rodgers & Nicewander, 1988) between the random variables $\mathrm{Tr}(H(\ell(\varphi(\theta, X), Y)))$ and $\mathrm{ACEHT}(S(\varphi(\theta, X)))$, and similarly for the margin bound. The choice of the r-value is natural because in the affine case the ACEHT and the Hessian trace are equivalent, so a linear relationship should be expected. Our method is also more general than just comparing some statistic, such as the average (which is generally used for flatness measures), of the above random variables.
For example, while the smallest margin over the dataset is commonly used as a generalization measure (Bartlett et al., 2017; Jiang et al., 2019; Neyshabur et al., 2017), Jiang et al. (2018) showed that higher moments of the distribution are a much better predictor of generalizability, as we will also see in Section 4.
Figure 2 is an example of such a fit for an affine predictor. While the high r-value of 0.97 confirms our analytic results, we also observe that the fit is not perfect, even though an exact relationship would be expected. The inaccuracies are due to the numerical methods used and become more pronounced the higher the Hessian trace is. To avoid outliers heavily impacting the linear regression model in the non-linear case, we will use the scikit-learn class LocalOutlierFactor (Breunig et al., 2000) to remove outliers before fitting the line. With this we prevent hand-picked points from skewing the results and also stabilize our results.
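A sketch of this procedure (note that LocalOutlierFactor lives in scikit-learn, while pearsonr is from SciPy; the function name is ours):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.neighbors import LocalOutlierFactor

def robust_r_value(predicted: np.ndarray, hessian_trace: np.ndarray) -> float:
    """Pearson r between a predictor (e.g., ACEHT) and the empirical Hessian
    trace, after dropping points flagged as outliers by LOF."""
    points = np.column_stack([predicted, hessian_trace])
    keep = LocalOutlierFactor().fit_predict(points) == 1   # -1 marks outliers
    r, _p = pearsonr(predicted[keep], hessian_trace[keep])
    return r
```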
3.2.1 EMPIRICAL EVIDENCE
We present our results using the convolutional neural network LeNet on the MNIST dataset, as they are representative of what we have observed on other architectures, hyperparameters, and datasets (see Appendix B). Our results use stochastic gradient descent with a fixed learning rate and batch size chosen to achieve an appropriate performance on the classification task. Because of the computational difficulty of computing the empirical Hessian trace for every single element in the input data, we consider 1,000 randomly selected datapoints from the training set. To highlight the computational difficulty of using even highly optimized numerical tools, such as PyHessian, we note that it takes us roughly 1.5 hours to compute the Hessian trace for the whole MNIST dataset, while it takes only 5 seconds for the margins.
In Figure 3 we present the plots relating the correlation of the empirical Hessian trace to the ACEHT and margin bound over the randomly sampled datapoints. Figures 3a and 3b show that for most of training, the correlation is between 0.8 and 1. Combining Figures 3a and 3c, it can be seen that the r-value increases with the model's training accuracy. Furthermore, the datapoints which are incorrectly predicted do not show a correlation.
With that we confirm the intuition that, indeed, flatter solutions are more robust and have larger margins. While we have found flatness and margins to be highly correlated in scenarios in which others have identified flatness to be a good generalization measure (Jiang et al., 2019; Keskar et al., 2016; Chaudhari et al., 2019), it may be that this is also an epiphenomenon of stochastic gradient descent or some other process, and there may be situations in which the relationship does not hold. However, our general advice to consider margins more is not impacted by this. In the scenarios where generalizability and flatness have been linked, we have also shown that margins and flatness are correlated; hence it is advantageous to use margins instead, for computational reasons or for more complete intuition. The only situation in which it is more likely that margins and flatness are not correlated is when flatness has not yet been linked to generalizability. In such a situation it may also be better to use the better-understood margin measure instead of a flatness measure to assess generalizability. In the next section we consider the first case, where we examine a general scenario in which flatness has been used to reason about generalizability, and offer a more insightful margin perspective.
4 PERSPECTIVE ON LARGE AND SMALL-BATCH METHODS
We now show how our results lead to a better understanding of phenomena which have been misleadingly attributed to flat minima. To do so, we consider the experiments which rekindled the debate around flat minima by Keskar et al. (2016), where flatness was used to explain why small-batch methods tend to generalize better than large-batch methods. The idea was that small-batch methods converge to flatter minima due to being able to "escape" sharp minima more easily. However, it has been shown that the minima of both methods appear to be in the same attractive basin (Sagun et al., 2017; Freeman & Bruna, 2016; Draxler et al., 2018), meaning that small-batch methods do not seem to escape any attractive basin but are merely in a different area of the same basin. While these results gave credence to flatter minima generalizing better, flatness does not seem to provide the full picture of why large-batch methods tend to do worse, and we believe that an explanation in terms of the margins is more illuminating.
4.1 EXPERIMENT SETUP
We will replicate the experiment by Keskar et al. (2016) for a fully connected network with batch-normalized layers on the MNIST dataset, as described in Appendix A. We chose the large-batch size to be 4096 and the small-batch size to be 256. To have a fair comparison, we use the same seed and take 10,000 gradient steps for both methods, instead of basing the stopping time on epochs. We also used stochastic gradient descent without momentum. With our setup we observe a phenomenon similar to Keskar et al. (2016) in Table 1. The small and large-batch methods both attain the same training accuracy and comparable training loss. However, the small-batch method is at a considerably flatter minimum and generalizes better than the large-batch method. We will now show that instead of considering the flatness, it is more insightful to consider margins to explain the difference in generalizability.
While the upper bound of ACEHT is in terms of the softmax margins, we consider the output margins in this section. The reason is that most margin-based generalization measures use the output margins. Another, more practical reason is that towards the end of training, the softmax margins are all very close to 1, making it difficult to visualize and observe the distribution. We also do not use a normalized version of the margins (such as Bartlett et al. (2017); Jiang et al. (2018)). Our reasoning
is that because we use the same architecture, the same dataset, and train in a similar manner the margin distributions will be comparable.
4.2 A MARGIN PERSPECTIVE ON LARGE AND SMALL-BATCH SIZES
In Figure 4 we see that the output margins and the Hessian trace are correlated, as expected from Section 3. We can also roughly see that the small-batch method has fewer low margins than the large-batch method. To emphasize this difference we consider Figure 4c, where we plot the histogram and box-plot of the output margin distribution for both the large-batch and small-batch method. We also display the skewness of each, which is the standardized third central moment. The box-plots and the skewness confirm that the small-batch method is dominated by large margins, indicating better generalizability (as discussed in Bartlett et al. (2017); Jiang et al. (2018)). The idea with a left-skewed margin distribution is that the tail with low-margin datapoints is mostly comprised of outliers and will not massively affect the robustness to input perturbations. This soft-margin SVM perspective is in contrast to hard-margin SVMs, where the margin is defined to be the minimum of all the distances to the decision boundary (Shalev-Shwartz & Ben-David, 2014). If a hard-margin view were adopted, then the small-batch method would be predicted to generalize worse, because it has the smallest margin, as we see in Figure 4c. However, the distribution of the small-batch method is also more left-skewed, which points to this minimum margin being an outlier rather than being indicative of generalizability.
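A small illustration of this skewness comparison; the two margin arrays below are random stand-ins for the small- and large-batch margin distributions, not the paper's data:

```python
import numpy as np
from scipy.stats import skew, skewnorm

rng = np.random.default_rng(0)
margins_small = skewnorm.rvs(-6, loc=10, scale=3, size=1000, random_state=rng)
margins_large = rng.normal(7.0, 3.0, size=1000)

# More negative skewness => the distribution is dominated by large margins.
print("skew, small batch:", skew(margins_small))
print("skew, large batch:", skew(margins_large))
```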
We now want to explain why the small-batch method generalizes well. As observed in Jastrzębski et al. (2017), a smaller batch size is similar to a larger learning rate; hence at every step the process will advance further than a large-batch method would. It has already been noted by Hoffer et al. (2017) that training longer leads the large-batch method to generalize just as well as the small-batch method, because it had time to "catch up", even though the decrease in training loss may be barely noticeable. We have also seen, via Banburski et al. (2019), that SGD converges to margin-maximizing solutions. Therefore, a method that is able to train or advance further will also be closer to a margin-maximizing solution. We therefore expect that large-batch methods not having had enough time to maximize margins is the driving force behind the large- vs. small-batch phenomenon.
5 BECOMING FLATTER WITH INCREASING MARGINS
Reparametrization problems such as shown by Dinh et al. (2017) are neither a new phenomenon nor should they necessarily discourage the design of algorithms which attempt to find flat minima. Rather, they inform on what aspects of a generalization measure need to be adjusted to allow them to be used in a practical setting. For SVMs, the problem of scaling the hyperplane normal to increase margins of correctly classified points is solved by scaling the normal to make it a unit vector, transforming the functional margin into the geometric margin (Shalev-Shwartz & Ben-David, 2014). In the case of neural networks, it is also known that scaling the last layer leads to an increase in the margins for data which has been correctly predicted (Neyshabur et al., 2017). This scaling issue has been successfully addressed (see Bartlett et al. (2017); Elsayed et al. (2018); Jiang et al. (2018)). Due to the relationship to the classification margins, it is natural to ask if flatness suffers from a similar problem. We confirm this with the following Proposition:
Proposition 5.1. For a given neural network $\varphi$, let $T_\alpha : \Theta \to \Theta$ be such that for all $x \in X$ and $\theta \in \Theta$ we have $\varphi(T_\alpha(\theta), x) = \alpha\,\varphi(\theta, x)$. Now assume a $\theta' \in \Theta$ and a datapoint $(x', y')$ for which $\mathrm{argmax}_{k \in \{1,\dots,C\}}(\varphi(\theta', x'))_k = y'$. Then

$$\forall s, t \in \{1, \dots, \dim(\Theta)\}: \quad \lim_{\alpha \to \infty} \partial_{\theta_s}\partial_{\theta_t}\ell(\varphi(T_\alpha(\theta'), x'), y') = 0. \qquad (1)$$
The proof is in Appendix D. From the Proposition we immediately derive the following Corollary:
Corollary 5.2. Assume that $\varphi$ and $\theta$ predict every datapoint in a set $D$ correctly. Then

$$\forall s, t \in \{1, \dots, \dim(\Theta)\}: \quad \lim_{\alpha \to \infty} \partial_{\theta_s}\partial_{\theta_t} L_D(T_\alpha(\theta)) = 0. \qquad (2)$$
Due to the Corollary, if a network has achieved full training accuracy, then it is equivalent under the $T_\alpha$ transformation to an arbitrarily flat network. We note that such a $T_\alpha$ transform exists for most networks: scaling the last layer is one simple instance, and for fully connected and convolutional networks with ReLU non-linearities, non-negative homogeneity implies that scaling each layer also yields a valid $T_\alpha$ transformation. The crucial property of the $T_\alpha$ map is that it does not change the relative order of the model outputs; therefore, given two networks which have achieved full training accuracy, we cannot determine which network should generalize better based solely on the flatness of the local geometry. We note that Banburski et al. (2019) mentioned such an issue, but they did not discuss it in the context of flat minima, and their arguments relied on further structure which we believe is less illuminating than our presentation and proofs.
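A minimal sketch of the last-layer instance of the $T_\alpha$ map in PyTorch; the toy two-layer network and function name are our own illustration:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(5, 4)

def t_alpha(model: nn.Sequential, alpha: float) -> None:
    """Scale the last layer in place so phi(T_alpha(theta), x) = alpha * phi(theta, x)."""
    with torch.no_grad():
        model[-1].weight.mul_(alpha)
        model[-1].bias.mul_(alpha)

before = net(x)
t_alpha(net, 10.0)
after = net(x)
# Outputs scale, so argmax predictions (and hence training accuracy) are unchanged,
# while output margins grow and, by Proposition 5.1, the minimum flattens.
assert torch.allclose(after, 10.0 * before, atol=1e-4)
```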
6 CONCLUSIONS
In this paper, we have related flatness to the classification margins in a principled manner, in contrast to other works that have made a more intuitive or less quantifiable connection (Huang et al., 2019; Wang et al., 2018; Neyshabur et al., 2017; Petzka et al., 2020). Our results lead to the immediate practical recommendation of using margins instead of the computationally expensive flatness to assess generalizability. We also use our results to replace the misleading notion that small-batch methods generalize better because they "escape" sharp minima, instead arguing that small-batch methods have more time to maximize margins. We were also motivated by the flatness and margin relationship to highlight that neural networks can be made arbitrarily flat. This implies that the generalizability of two networks can not be distinguished based on flatness and hence needs to be addressed to make flatness a viable generalization measure. Based on our results, future work may assess whether flatness is an epiphenomenon of the optimization methods, because now recent work on margins (e.g. Banburski et al. (2019); Soudry et al. (2018)) can be applied to reason about flatness. Furthermore, by relating properties of the parameter space (flatness) to properties of the input space (margin) there is now an opportunity to further explore results such as by Sagun et al. (2017), where they found that the Hessian, with respect to the parameters of a neural network upon convergence, has as many positive eigenvalues as the number of classes in the dataset used. Overall, our results enable more principled discussion on how flatness may contribute to generalizability.
A APPENDIX: NETWORK ARCHITECTURE AND DATASETS
A.1 NETWORK ARCHITECTURE
We implement the convolutional neural network LeNet-5 as described in LeCun et al. (1998). Our fully connected neural network with batch-normalized layers (FCNBN) is inspired by Keskar et al. (2016). It has a 784-dimensional (MNIST) or 1024-dimensional (CIFAR10) input layer, followed by three batch-normalized (Ioffe & Szegedy, 2015) layers with ReLU non-linearities and a 10-dimensional output layer.
A.2 DATASETS
B APPENDIX: FLATNESS AND MARGIN CORRELATION
Here we present further evidence of the flatness and margin correlation discussed in Section 3. As in Section 3, we have used appropriate learning rates and batch sizes to get a reasonable performance for the task, and have observed our results to hold for different hyperparameters. One instance where we demonstrate two different batch sizes is the Fully Connected Network with Batch Normalization on MNIST (Appendix B.1), where we present results for batch sizes of 256 and 4096. We again only consider 1,000 randomly selected datapoints from the training set, due to the computational difficulty of computing the Hessian trace. If the network achieves full training accuracy and there are no incorrectly classified datapoints, we set the r-value to zero.
Overall, we observe the same results as in Section 3 and a correlation between 0.8 and 1. As before, the correlation increases with increasing training accuracy for correctly predicted datapoints.
B.1 FULLY CONNECTED NETWORK WITH BATCH NORMALIZATION ON MNIST
B.1.1 BATCH SIZE: 256
[Figure: panels (a) ACEHT to HT, (b) SM Bound to HT, (c) Training Accuracy; (d) Initialization, (e) Step 5000, (f) Final Step (10,000)]
B.1.2 BATCH SIZE: 4096
[Figure: panels (a) ACEHT to HT, (b) SM Bound to HT, (c) Training Accuracy; (d) Initialization, (e) Step 5000, (f) Final Step (10,000)]
B.2 LENET ON CIFAR10
[Figure: panels (a) ACEHT to HT, (b) SM Bound to HT, (c) Training Accuracy; (d) Initialization, (e) Step 5000, (f) Final Step (10,000)]
B.2.1 FULLY CONNECTED NETWORK WITH BATCH NORMALIZATION ON CIFAR10
[Figure: panels (a) ACEHT to HT, (b) SM Bound to HT, (c) Training Accuracy; (d) Initialization, (e) Step 5000, (f) Final Step (10,000)]
C APPENDIX: DERIVATIVES OF THE CROSS-ENTROPY LOSS
C.1 GENERAL FORM
For the general form we consider the cross-entropy loss for a predictor function scaled by some scalar $\alpha$. Specifically, we assume an arbitrary input-output pair $(x, y) \in X \times Y$ and compute the partial derivatives with respect to the parameters $\theta$ of the predictor function $\alpha\varphi(\theta, x)$. Since the equations can become very long, we declutter the notation by letting $S = S(\alpha\varphi(\theta, x))$, $\varphi = \varphi(\theta, x)$, and for two $d$-dimensional vectors $x, y \in \mathbb{R}^d$ we write $\langle x, y\rangle = \sum_{i=1}^d x_i y_i$. We also denote elementwise multiplication by $\odot$ and let $\Phi$ be a matrix such that $(\Phi)_{ij} = \varphi_j$.
Lemma C.1. The first partial derivative of the cross-entropy loss with respect to an element $\theta_i$ is:

$$\partial_{\theta_i}\ell(\alpha\varphi(\theta, x), y) = -\alpha\Big(\partial_{\theta_i}\varphi_y - \sum_{l=1}^{C} \partial_{\theta_i}\varphi_l\, S_l\Big) = -\alpha\big(\partial_{\theta_i}\varphi_y - \langle\partial_{\theta_i}\varphi, S\rangle\big).$$
Proof. We have

$$\partial_{\theta_i}\ell(\alpha\varphi, y) = \partial_{\theta_i}\big({-\log(S_y)}\big) = -\frac{1}{S_y}\,\partial_{\theta_i} S_y. \qquad (3)$$

With some manipulation we compute $\partial_{\theta_i} S_y$:

$$\partial_{\theta_i} S_y = \frac{\partial_{\theta_i} e^{\alpha\varphi_y}}{\sum_{k=1}^{C} e^{\alpha\varphi_k}} - \frac{e^{\alpha\varphi_y}}{\big(\sum_{k=1}^{C} e^{\alpha\varphi_k}\big)^2}\sum_{l=1}^{C} \partial_{\theta_i} e^{\alpha\varphi_l} = \alpha\,\partial_{\theta_i}\varphi_y\,\frac{e^{\alpha\varphi_y}}{\sum_{k=1}^{C} e^{\alpha\varphi_k}} - \alpha\,\frac{e^{\alpha\varphi_y}}{\sum_{k=1}^{C} e^{\alpha\varphi_k}}\sum_{l=1}^{C} \frac{e^{\alpha\varphi_l}}{\sum_{k=1}^{C} e^{\alpha\varphi_k}}\,\partial_{\theta_i}\varphi_l = S_y\,\alpha\Big(\partial_{\theta_i}\varphi_y - \sum_{l=1}^{C} \partial_{\theta_i}\varphi_l\, S_l\Big). \qquad (4)$$

Combining Equations 3 and 4 we obtain Lemma C.1:

$$\partial_{\theta_i}\ell(\alpha\varphi, y) = -\alpha\Big(\partial_{\theta_i}\varphi_y - \sum_{l=1}^{C} \partial_{\theta_i}\varphi_l\, S_l\Big).$$
Lemma C.2. The second partial derivative of the cross-entropy loss with respect to elements $\theta_s$ and $\theta_t$ is:

$$\partial_{\theta_t}\partial_{\theta_s}\ell(\alpha\varphi(\theta, x), y) = -\alpha\Big(\partial_{\theta_t}\partial_{\theta_s}\varphi_y - \langle\partial_{\theta_t}\partial_{\theta_s}\varphi, S\rangle - \big\langle\partial_{\theta_s}\varphi,\ \alpha\, S \odot \big(\partial_{\theta_t}\varphi - (\partial_{\theta_t}\Phi)S\big)\big\rangle\Big).$$
Proof. Differentiating the first-order derivative given by Lemma C.1, we obtain by the multi-variable chain rule:

$$\partial_{\theta_t}\partial_{\theta_s}\ell(\alpha\varphi, y) = -\alpha\big(\partial_{\theta_t}\partial_{\theta_s}\varphi_y - \langle\partial_{\theta_t}\partial_{\theta_s}\varphi, S\rangle - \langle\partial_{\theta_s}\varphi, \partial_{\theta_t}S\rangle\big). \qquad (5)$$

To compute $\partial_{\theta_t} S$ in Equation 5 we use Equation 4 and obtain:

$$(\partial_{\theta_t} S)_i = \partial_{\theta_t} S_i = S_i\,\alpha\big(\partial_{\theta_t}\varphi_i - \langle\partial_{\theta_t}\varphi, S\rangle\big),$$

which after some simplification reduces to:

$$\partial_{\theta_t} S = \alpha\, S \odot \big(\partial_{\theta_t}\varphi - (\partial_{\theta_t}\Phi)S\big). \qquad (6)$$

Combining Equations 5 and 6 we obtain Lemma C.2:

$$\partial_{\theta_t}\partial_{\theta_s}\ell(\alpha\varphi, y) = -\alpha\Big(\partial_{\theta_t}\partial_{\theta_s}\varphi_y - \langle\partial_{\theta_t}\partial_{\theta_s}\varphi, S\rangle - \big\langle\partial_{\theta_s}\varphi,\ \alpha\, S \odot \big(\partial_{\theta_t}\varphi - (\partial_{\theta_t}\Phi)S\big)\big\rangle\Big).$$
C.2 AFFINE CROSS-ENTROPY HESSIAN TRACE
We now present the proof of Proposition 3.1:
Proof. Throughout the proof we make use of Lemma C.2 with $\alpha = 1$. We also note that any second derivative of $\varphi((\theta, b), x)$ with respect to the parameters is zero since $\varphi$ is an affine classifier.
We first consider the derivatives with respect to elements of $\theta$, where $\theta_{i,j}$ denotes the element in the $i$th row and $j$th column of the matrix $\theta$. Notice that $\partial_{\theta_{i,j}}\varphi = x_j e_i$. The second-order derivatives are then given by:
$$\partial_{\theta_{i,j}}\partial_{\theta_{s,t}}\ell(\varphi, y) = \big\langle x_t e_s,\ S \odot \big(x_j e_i - x_j S_i \mathbb{1}\big)\big\rangle = x_t\, x_j\, S_s\,(\delta_{si} - S_i).$$
Since the trace only involves the elements on the diagonal, i.e. $(s,t) = (i,j)$, we get:
$$\partial_{\theta_{i,j}}\partial_{\theta_{i,j}}\ell(\varphi, y) = x_j^2\, S_i\,(1 - S_i).$$
Now we consider derivatives with respect to elements of $b$ and notice that $\partial_{b_i}\varphi = e_i$. For the second derivative we then get:
$$\partial_{b_j}\partial_{b_i}\ell(\varphi, y) = \big\langle e_i,\ S \odot (e_j - S_j\mathbb{1})\big\rangle = S_i\,(\delta_{ij} - S_j),$$
whose diagonal entries are $S_i(1 - S_i)$.
Finally, summing up the diagonal of the total Hessian we get:
$$\mathrm{Tr}\big(H(\ell(\varphi((\theta,b),x), y))\big) = \sum_{i,j} x_j^2\, S_i(1 - S_i) + \sum_i S_i(1 - S_i) = (|x|^2 + 1)\Big(1 - \sum_j S_j^2\Big),$$
where we used the fact that $\sum_i S_i = 1$.
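The closed form is easy to verify numerically. Below is a minimal sketch (illustrative, not the code used for our experiments) that compares the autograd Hessian trace of a random affine classifier under the cross-entropy loss against the expression in Proposition 3.1:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
C, d = 5, 8                                   # number of classes, input dimension
x, y = torch.randn(d), torch.tensor(2)
theta, b = torch.randn(C, d), torch.randn(C)

def loss_fn(flat_params):
    th, bb = flat_params[: C * d].view(C, d), flat_params[C * d:]
    logits = th @ x + bb
    return -F.log_softmax(logits, dim=0)[y]   # cross-entropy for one example

flat = torch.cat([theta.reshape(-1), b])
H = torch.autograd.functional.hessian(loss_fn, flat)
trace_autograd = torch.diagonal(H).sum()

S = F.softmax(theta @ x + b, dim=0)
trace_closed_form = (x.pow(2).sum() + 1) * (1 - S.pow(2).sum())
print(trace_autograd.item(), trace_closed_form.item())  # the two traces agree
```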
D APPENDIX: SCALING PROOF
To prove Proposition 5.1 we first prove the following lemma:
Lemma D.1. Assume that the argmax of $\varphi$ is the correct class $y$ and is unique. Then for $k \in \mathbb{N}$, $k \ge 1$ and $i \ne y$:
$$\lim_{\alpha\to\infty} \alpha^k S_i(\alpha\varphi) = 0. \quad (7)$$
Proof. Let $y$ be such that $\varphi_y = \max_{k\in\{1,\dots,C\}}\varphi_k$. For $i \ne y$ we have:
$$\lim_{\alpha\to\infty} \alpha^k\,\frac{e^{\alpha\varphi_i}}{\sum_{k=1}^{C}e^{\alpha\varphi_k}} = \lim_{\alpha\to\infty} \frac{\alpha^k}{e^{\alpha(\varphi_y-\varphi_i)} + \sum_{k=1,k\ne y}^{C} e^{\alpha(\varphi_k-\varphi_i)}} = \lim_{\alpha\to\infty} \frac{k!}{(\varphi_y-\varphi_i)^k e^{\alpha(\varphi_y-\varphi_i)} + \sum_{k=1,k\ne y}^{C}(\varphi_k-\varphi_i)^k e^{\alpha(\varphi_k-\varphi_i)}},$$
where the last step follows from applying L'Hôpital's rule $k$ times. Since we assumed that $y$ is the unique index with $\varphi_y = \max_{k\in\{1,\dots,C\}}\varphi_k$, we have $\varphi_k < \varphi_y$ for all $k \ne y$, and hence $e^{\alpha(\varphi_k-\varphi_y)} \to 0$ as $\alpha\to\infty$. Multiplying the numerator and denominator by $e^{\alpha(\varphi_i-\varphi_y)}$ therefore gives:
$$\lim_{\alpha\to\infty} \frac{k!\,e^{\alpha(\varphi_i-\varphi_y)}}{(\varphi_y-\varphi_i)^k + \sum_{k=1,k\ne y}^{C}(\varphi_k-\varphi_i)^k e^{\alpha(\varphi_k-\varphi_y)}} = 0.$$
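The speed of this decay is easy to see numerically; below is a small illustration (with made-up logits) of $\alpha^k S_i(\alpha\varphi)$ for $k = 2$ and a non-argmax class $i$:

```python
import numpy as np

phi = np.array([2.0, 0.5, -1.0])        # logits; class 0 is the unique argmax
i = 1                                   # any class other than the argmax
for alpha in [1.0, 5.0, 10.0, 20.0, 40.0]:
    S = np.exp(alpha * phi - (alpha * phi).max())   # numerically stable softmax
    S /= S.sum()
    print(alpha, alpha**2 * S[i])       # vanishes despite the alpha^2 factor
```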
We are now ready to prove Proposition 5.1:
Proof. We first show that the term $-\alpha^2\big\langle\partial_{\theta_i}\varphi,\ S\odot\big(\partial_{\theta_t}\varphi - (\partial_{\theta_t}\Phi)S\big)\big\rangle$ always goes to zero. Expanding, we get:
$$\alpha^2\sum_{l=1}^{C}\partial_{\theta_i}\varphi_l\,S_l\Big(\partial_{\theta_t}\varphi_l - \sum_{k=1}^{C}\partial_{\theta_t}\varphi_k S_k\Big) = \alpha^2\,\partial_{\theta_i}\varphi_y\,S_y\Big(\partial_{\theta_t}\varphi_y - \sum_{k=1}^{C}\partial_{\theta_t}\varphi_k S_k\Big) + \alpha^2\sum_{l=1,\,l\ne y}^{C}\partial_{\theta_i}\varphi_l\,S_l\Big(\partial_{\theta_t}\varphi_l - \sum_{k=1}^{C}\partial_{\theta_t}\varphi_k S_k\Big).$$
We now show that each term goes to zero. Consider $l \ne y$. Since $0 \le S_k \le 1$, the triangle inequality gives
$$\Big|\alpha^2\,\partial_{\theta_i}\varphi_l\,S_l\Big(\partial_{\theta_t}\varphi_l - \sum_{k=1}^{C}\partial_{\theta_t}\varphi_k S_k\Big)\Big| \le \alpha^2 S_l\,|\partial_{\theta_i}\varphi_l|\Big(|\partial_{\theta_t}\varphi_l| + \sum_{k=1}^{C}|\partial_{\theta_t}\varphi_k|\Big) = \alpha^2 S_l M,$$
where $M = |\partial_{\theta_i}\varphi_l|\big(|\partial_{\theta_t}\varphi_l| + \sum_{k=1}^{C}|\partial_{\theta_t}\varphi_k|\big) < \infty$ is a constant independent of $\alpha$, since the derivatives of $\varphi$ are evaluated at the fixed $\theta'$ and the sum is finite. By Lemma D.1, $\alpha^2 S_l M \to 0$ as $\alpha \to \infty$, and hence each such term goes to zero.
We now consider $l = y$. Using $\sum_{k=1}^{C} S_k = 1$, so that $1 - S_y = \sum_{s\ne y} S_s$, we get
$$\Big|\alpha^2\,\partial_{\theta_i}\varphi_y\,S_y\Big(\partial_{\theta_t}\varphi_y - \sum_{k=1}^{C}\partial_{\theta_t}\varphi_k S_k\Big)\Big| \le |\partial_{\theta_i}\varphi_y|\,S_y\Big(|\partial_{\theta_t}\varphi_y|\,\alpha^2\sum_{s\ne y} S_s + \alpha^2\sum_{k\ne y}|\partial_{\theta_t}\varphi_k|\,S_k\Big),$$
and since $S_y \le 1$, applying Lemma D.1 to each summand shows that this term is zero in the limit.
We are left with showing that $\alpha\big(\partial_{\theta_t}\partial_{\theta_i}\varphi_y - \langle\partial_{\theta_t}\partial_{\theta_i}\varphi, S\rangle\big)$ goes to zero (this is only guaranteed when $y$ is the true label). We use the same method as above:
$$\big|\alpha\big(\partial_{\theta_t}\partial_{\theta_i}\varphi_y - \langle\partial_{\theta_t}\partial_{\theta_i}\varphi, S\rangle\big)\big| \le \alpha\,\big|\partial_{\theta_t}\partial_{\theta_i}\varphi_y\big|\,(1 - S_y) + \alpha\sum_{l=1,\,l\ne y}^{C}\big|\partial_{\theta_t}\partial_{\theta_i}\varphi_l\big|\,S_l,$$
and the result follows by once more writing $1 - S_y = \sum_{l\ne y} S_l$ and applying Lemma D.1.
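A concrete illustration of Proposition 5.1 in the affine case, where Proposition 3.1 gives the trace in closed form, is sketched below: scaling the logits of a correctly classified point by $\alpha$ (a valid $T_\alpha$ for the affine model) drives the Hessian trace of its loss toward zero. The logits here are made up for illustration.

```python
import numpy as np

x = np.array([0.8, -1.2, 0.3])
phi = np.array([1.5, 0.2, -0.7])        # logits theta @ x + b; class 0 is correct
for alpha in [1.0, 2.0, 5.0, 10.0, 25.0]:
    S = np.exp(alpha * phi - (alpha * phi).max())
    S /= S.sum()
    trace = (x @ x + 1) * (1 - S @ S)   # Proposition 3.1 applied to alpha * phi
    print(alpha, trace)                 # the minimum becomes arbitrarily flat
```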
1. What is the main contribution of the paper regarding the correlation between flatness and margin?
2. What are the strengths and weaknesses of the experimental results presented in the paper?
3. Do you have any concerns about the arguments made by the authors regarding large-batch optimization and small-batch methods?
4. How do the results of the paper compare to prior works in terms of novelty and significance?
5. Are there any issues with the narrowed margin that violate the policy mentioned in the review?
Review
This paper studies the correlation between the flatness of the converged local minimum and the margin. The authors report experimental results that verify the positive correlation. They suggest using margin-based measures to assess generalizability. Also, the authors argue that large-batch optimization does not have enough time to maximize margins and hence generalizes worse, and they suggest using this to replace the “misleading folklore” that small-batch methods generalize better because they are able to escape sharp minima. In addition, the authors significantly narrowed the page margins, which would violate the policy: “Tweaking the style files may be grounds for rejection.”
Overall, I vote for rejection. The experiments are described in detail and seem correct. However, I am worried that (1) the reported results are not new; and (2) the authors argue that existing results are misleading but do not provide enough support for this argument. Extraordinary claims require extraordinary evidence. It would be good if the authors could address these concerns in the rebuttal.
Pros:
The authors conduct experiments which verify a positive correlation between the margin and the flatness.
Cons:
The results are not new. It is well known that (1) the margin is a good measure for assessing generalizability [1-4]; and (2) flatness has a strong correlation with generalizability, as the authors have stated. Combining (1) and (2), it is not surprising that margin and flatness have a strong correlation.
The authors argue that large-batch optimization does not have enough time to maximize margins and hence generalizes worse. This argument lacks both theoretical and empirical evidence.
The authors argue that it is misleading folklore that small-batch methods generalize better because they are able to escape sharp minima, again without evidence. In contrast, this has been established in many works, e.g., Sagun et al. (2017), as the authors themselves mention.
The authors significantly narrowed the page margins. As stated in the template: “Tweaking the style files may be grounds for rejection.”
Questions: It would be good if the authors could address the cons above.
[1] Vladimir Vapnik. Statistical learning theory. Wiley, New York, 1998.
[2] Peter Bartlett and John Shawe-Taylor. “Generalization performance of support vector machines and other pattern classifiers.” In Advances in Kernel Methods: Support Vector Learning, pages 43–54, 1999.
[3] Vladimir Koltchinskii and Dmitry Panchenko. “Empirical margin distributions and bounding the generalization error of combined classifiers.” The Annals of Statistics, 30(1):1–50, 2002.
[4] Ben Taskar, Carlos Guestrin, and Daphne Koller. “Max-margin Markov networks.” In Advances in Neural Information Processing Systems, pages 25–32, 2004.
1. What is the main contribution of the paper regarding flat minima and large margins in neural networks?
2. What are the strengths of the paper, particularly in explaining the interplay between these two factors?
3. Do you have any concerns or questions about the claims made in the paper, especially those based on folklore and beliefs without empirical or theoretical confirmation?
4. Are there any typos or errors in the theoretical analysis, specifically in Proposition 3.1 and its discussion?
5. What are your thoughts on the message in Section 5, particularly regarding the impact of simple scaling on the margin and generalization?
6. How might the paper be improved, such as providing additional experiments to support the observations or using a standard ICLR template for formatting?
Review
----------- Overview of the paper --------------
This paper studies the correlation of flat minima and large margins, as well as their impact on the generalization abilities of neural networks. The main tool in the paper is the relation between the trace of the Hessian and the functional margin. The empirical study reveals that, in both linear and nonlinear settings, there exists an intimate connection between the trace of the Hessian and the margin. This idea is further utilized to explain learning with large/small batch sizes.
----------- Contributions and strengths --------------
Properties of the stationary points found by training algorithms (e.g., SGD) are a much-studied topic, especially in deep learning. Flatness and the corresponding margins are believed to be influential factors in the generalizability of neural networks. The paper makes an effort to explain the interplay of these two factors, which is helpful in forming a deeper understanding of neural networks. The message of the paper is rather clear.
----------- Weakness --------------
Some claims in the paper are conjectures and lack a serious treatment. For example, in Section 4, the margin perspective on large and small batch sizes is based on folklore and beliefs without empirical or theoretical confirmation. The experiments in Section 4 are used to support the observation from existing works, rather than to further explain the observation. Extra experiments corroborating that ``large-batch methods not having had enough time to maximize margins is the driving force behind the large vs small-batch phenomenon'' should be provided.
For the theory part, there seem to be some typos.
----------- Questions and comments --------------
In Proposition 3.1 and the following discussion, \eta is undefined, or it should be replaced by \gamma. It may be better to state the bound in terms of the margin in the proposition itself. I am also curious why there is a dependence on the norm of the input in the trace of the Hessian, and what the implications of such a dependence are.
I am a bit confused by the message in Section 5. The concluding remark seems to argue that when the data is perfectly classified, simple scaling can alter the margin yet does not change the generalization. I think this is a direct consequence of the positive homogeneity of the classifiers.
The paper is prepared in a nonstandard ICLR template. The page margins of the current draft are much smaller than those of the ICLR template. Although the content fits in 8 pages now, it would certainly be around 10 pages in the standard template.
ICLR | Title
On Flat Minima, Large Margins and Generalizability
Abstract
The intuitive connection to robustness and convincing empirical evidence have made the flatness of the loss surface an attractive measure of generalizability for neural networks. Yet it suffers from various problems such as computational difficulties, reparametrization issues, and a growing concern that it may only be an epiphenomenon of optimization methods. We provide empirical evidence that under the cross-entropy loss once a neural network reaches a non-trivial training error, the flatness correlates (via Pearson Correlation Coefficient) well to the classification margins, which allows us to better reason about the concerns surrounding flatness. Our results lead to the practical recommendation that when assessing generalizability one should consider a margin-based measure instead, as it is computationally more efficient, provides further insight, and is highly correlated to flatness. We also use our insight to replace the misleading folklore that smallbatch methods generalize better because they are able to escape sharp minima. Instead we argue that large-batch methods did not have enough time to maximize margins and hence generalize worse.
1 INTRODUCTION
Understanding under which conditions a neural network will generalize from seen to unseen data is crucial, as it motivates design choices and principles which can greatly improve performance. Complexity or generalization measures are used to quantify the properties of a neural network which lead to good generalization. Currently however, established complexity measures such as VC-Dimension (Vapnik, 1998) or Rademacher Complexity (Bartlett & Mendelson, 2002) do not correlate with the generalizability of neural networks (e.g. see Zhang et al. (2016)). Hence many recommendations, such as reducing model complexity, early stopping, or adding explicit regularization are also not applicable or necessary anymore. Therefore, there is an ongoing effort to devise new complexity measures that may guide recommendations on how to obtain models that generalize well.
A popular approach is to consider the flatness of the loss surface around a neural network. Hochreiter & Schmidhuber (1997) used the minimum description length (MDL) argument of Hinton & Van Camp (1993) to claim that the flatness of a minimum can also be used as a generalization measure. Motivated by this new measure Hochreiter & Schmidhuber (1997), and more recently Chaudhari et al. (2019), developed algorithms with explicit regularization intended to converge to flat solutions. Keskar et al. (2016) then presented empirical evidence that flatness relates to improved generalizability and used it to explain the behavior of stochastic gradient descent (SGD) with large and small-batch sizes. Other works since have empirically corroborated that flatter minima generalize better (e.g. Jiang et al. (2019); Li et al. (2018); Bosman et al. (2020)).
There are however various issues that are still unresolved, which makes using flatness for constructing practical deep learning recommendations difficult. For one, flatness is computationally expensive to compute. The most common way to compute the flatness is via the Hessian, which grows quadratically in the number of parameters; this becomes too large when used with modern networks containing millions of parameters. It is also not clear to what extent flatness is a true measure of generalizability, capable of discerning which neural network will or will not generalize. Dinh et al. (2017) showed that reparametrizations affect flatness and a flat model can be made arbitrarily sharp without changing any of its generalization properties. In addition Probably Approximately Correct (PAC-Bayes) bounds that bound the generalizability in terms of the flatness are also either affected
by rescaling, impossible to evaluate or loose (Neyshabur et al., 2017; Arora et al., 2018; Petzka et al., 2020). While there have been solutions attempting to prevent issues around reparametrization (Liang et al., 2019; Tsuzuku et al., 2019), it remains to establish whether flatness is an epiphenomenon of stochastic gradient descent or other complexity measures as Achille et al. (2018) and Jastrzebski et al. (2018) are suggesting. This motivates investigating possible correlations to more well-understood measures of generalization that may help alleviate issues surrounding flat minima, while allowing flat minima to be used when appropriate.
In this paper we will demonstrate a correlation to classification margins, which are a well-understood generalization measure. Margins represent the linearized distance to the decision boundaries of the classification region (Elsayed et al., 2018). An immediate consequence of such a relationship is that to assess generalizability, we could now simply use a computationally cheap and more robust margin based complexity measure. Our contributions will demonstrate further practical implications of the relationship between margins and flatness which open doors to valuable future work such as a better understanding of why and when a model generalizes and more principled algorithm design.
• We prove that under certain conditions flatness and margins are strongly correlated. We do so by deriving the Hessian trace for the affine classifier. Based on its form, we derive an expression in terms of classification margins which we show correlates well with the Hessian trace, with increasing training accuracy for various neural network architectures. By being able relate the two complexity measures, we are now able to provide various practical recommendations, and offer different perspectives on phenomena that may not be explainable without such a view. These are shown in the following contributions.
• We use our insight to replace the misleading folklore that, unlike large-batch methods, small-batch methods are able to escape sharp minima (Keskar et al., 2016). We instead employ a margin perspective and use our empirical results along with recent results by Banburski et al. (2019) and Hoffer et al. (2017) to argue that a large batch method was unable to train long enough to maximize the margins. With our explanation, we help reframe the small and large-batch discussion and build further intuition.
• We show that once a neural network is able to correctly predict the label of every element in the training set it can be made arbitrarily flat by scaling the last layer. We are motivated by the relationship to margins which suffer from the same issue. We highlight this scaling issue because, in some instances, it may still be beneficial for algorithm design to be guided by convergence to flat regions. Hence, we need to account for scaling issues which make it difficult to use flatness to assess whether a network generalizes better than another.
Other works have made connections between flatness and well-behaved classification margins via visualizations (see Huang et al. (2019); Wang et al. (2018)), but they have not demonstrated a quantifiable relationship. Further work has used both the classification margins and flatness to construct PAC-Bayes bounds (Neyshabur et al., 2017; Arora et al., 2018), and have related flatness to increased robustness (Petzka et al., 2020; Borovykh et al., 2019) however they did not show when and to what extent these quantities are related.
We structure the paper as follows. In Section 2, we discuss both our notation and our motivation choosing the cross-entropy loss and the Hessian trace as the flatness measure and provide further background on the classification margins. In Section 3, we present our contribution showing a strong correlation between the margins and flatness by deriving. In Section 4, we combine recent results based on classification margins to offer a different perspective on the misleading folklore on why larger-batch methods generalize worse. In Section 5, we highlight that networks can be made arbitrarily flat. Lastly, we offer our thoughts and future work in the Section 6.
2 PROBLEM SETTING
We first define the basic notation that we use for a classification task. We let X represent the input space and Y = {1, ..., C} the output space where C are the number of possible classes. The network architecture is given by φ : Θ × X → R|Y| where Θ is the corresponding parameter space. We measure the performance of a parameter vector by defining some loss function ` : RC × Y → R. If we have have a joint probability distribution D relating input and output space then we would
like to minimize the expected loss LD(θ) = E(x,y)∼D[`(φ(θ, x), y)]. Since we usually only have access to some finite dataset D, we denote the empirical loss by L̃D(θ) = 1|D| ∑|D| i=1 `(φ(θ, xi), yi). If LD and L̃D are close, then we would say a model generalizes well, as we were able to train on a finite dataset and extrapolate to the true distribution. We will use the cross-entropy loss which is given by `(φ(θ, x), y) = − log(Sy(φ(θ, x))) where the softmax function S : RC → RC is given by S(a)i =
eai∑C j=1 e aj (see Goodfellow et al. (2016)).
The choice of the cross-entropy function as the loss function has a significant impact on how the flatness measure behaves. Unlike the multiclass mean squared error (MMSE), exponential type losses such as the cross-entropy loss on neural networks have been shown to include implicit regularization which leads to margin maximizing solutions for neural networks (Banburski et al., 2019). Also, various properties for flat minima which have been proven for the MMSE loss by Mulayoff & Michaeli are not applicable to the cross-entropy loss, further highlighting the fundamental differences between the loss functions. While the MMSE loss has shown some promise for many classification tasks (Hui & Belkin, 2020) the cross-entropy loss is still the loss which is most used and was primarily used for the empirical evidence around flat minima (Keskar et al., 2016; Chaudhari et al., 2019), which motivates our choice.
The qualitative description of a flat region was given by Hochreiter & Schmidhuber (1997) as “a large connected region in parameter space where the error remains approximately constant". We measure the flatness by the trace of the Hessian of the loss with respect to the parameters (in short the Hessian trace) denoted by Tr(Hθ(L̃D(θ)) (Dinh et al., 2017). Since the Hessian is symmetric, the Hessian trace is equivalent to the sum of its eigenvalues which for a fixed parameter space is proportional to the expected increase of the second order approximation of the loss around a fixed minimum θ in a random direction θ′ with θ′ ∼ N (θ, I). Since we apply flatness arguments only close to minima, we assume that all eigenvalues are positive and that the Hessian trace is a good measure of flatness Sagun et al. (2017). Even though the Hessian is only an approximation of flatness, the Hessian is often preferred as it allows us to reason about various directions in parameter space via its eigenvectors and eigenvalues (see Sagun et al. (2017); Chaudhari et al. (2019)) and alleviates the issue of infinitely long but sharp ridges making a minimum infinitely flat (Dinh et al., 2017; Freeman & Bruna, 2016). The Hessian has also been linked to feature robustness via its use in the second order approximation of the loss (e.g. Petzka et al. (2020); Borovykh et al. (2019)) and is a promising quantity to relate to the margins.
As we are working with non-linear functions it is intractable to compute exact distances to the decision boundary; we therefore use a measure which is related to the linearized distance as described in Elsayed et al. (2018). Under this view, larger margins are better because the data is further from the decision boundary. Specifically, we define the margins as in Neyshabur et al. (2017): for some vector $v \in \mathbb{R}^C$ and label $y$ we let the margin of $v$ be $\gamma(v, y) = |v_y - \max_{j \neq y} v_j|$. Since we use the margin in different contexts, we distinguish the output margins $\gamma(\phi(\theta, x), y)$ and the margins of the model output after the softmax layer $\gamma(S(\phi(\theta, x)), y)$. Due to the intuition of margins relating to the regularity of the classification regions, they have been proven to be a good generalization measure for linear networks (Langford & Shawe-Taylor, 2003) and later, when correctly adjusted, for neural networks (see Bartlett et al. (2017); Jiang et al. (2018; 2019)). Due to results by Banburski et al. (2019) and Soudry et al. (2018), Poggio et al. (2019) claimed that a large part of the mystery around generalizability has been solved, since standard optimization methods maximize the margin instead of memorizing data.
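To make these definitions concrete, the following minimal NumPy sketch (our illustration; the helper names are ours) computes the softmax, the output margins $\gamma(\phi(\theta, x), y)$, and the softmax margins $\gamma(S(\phi(\theta, x)), y)$ for a batch of logits.

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max for numerical stability.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def margins(v, y):
    # gamma(v, y) = |v_y - max_{j != y} v_j| for each row of v.
    n = v.shape[0]
    v_y = v[np.arange(n), y]
    rest = v.copy()
    rest[np.arange(n), y] = -np.inf  # exclude the true class
    return np.abs(v_y - rest.max(axis=1))

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.0]])
labels = np.array([0, 1])
print(margins(logits, labels))           # output margins
print(margins(softmax(logits), labels))  # softmax margins
```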
3 THE MARGIN AND HESSIAN TRACE RELATIONSHIP
3.1 THE AFFINE CROSS-ENTROPY HESSIAN TRACE
Generally, it is difficult to derive a closed form solution of the Hessian trace due to the non-linear nature of neural networks. To gain insight into what may determine the flatness or sharpness of a solution we consider an affine prediction function for which we derive the following simple and insightful expression for the Hessian trace:
Proposition 3.1 (Affine Cross-Entropy Hessian Trace (ACEHT)). Assume an affine predictor given by $\phi((\theta, b), x) = \theta x + b$ where $(\theta, b) \in \mathbb{R}^{C \times d} \times \mathbb{R}^C = \Theta$. Then the trace of the Hessian of the cross-entropy loss for this predictor is:
$$\mathrm{Tr}(H(\ell(\phi((\theta, b), x), y))) = (|x|^2 + 1)\Big(1 - \sum_{j=1}^{C} S_j^2(\phi((\theta, b), x))\Big) = (|x|^2 + 1)\big(1 - |S(\phi((\theta, b), x))|^2\big).$$
The derivation is in Appendix C. We immediately observe that the trace of the Hessian is a product of the size of the input and $1 - \eta(S(\phi(\theta, x)))$, where $\eta(S(\phi(\theta, x))) = \sum_{j=1}^{C} S_j^2(\phi(\theta, x))$; we can view $1 - \eta(S(\phi(\theta, x)))$ as a confidence measure. In the visualization provided in Figure 1 we clearly see that $1 - \eta(S(\phi(\theta, x)))$ is zero only when the predictor predicts one class with probability 1, regardless of whether it is the correct class or not. When the model is least confident, namely when every class is predicted with probability $1/C$, then $1 - \eta(S(\phi(\theta, x)))$ is highest. Hence, in the affine case with the cross-entropy loss, the Hessian trace can be seen as an indication of the model's confidence in its prediction. This confidence interpretation is also connected to the classification margins by observing that $S_y \geq \gamma(S(\phi(\theta, x)), y)$ and hence $1 - \sum_{j=1}^{C} S_j^2(\phi(\theta, x)) \leq 1 - S_y^2(\phi(\theta, x)) \leq 1 - \gamma^2(S(\phi(\theta, x)), y)$. Therefore, if the margins are large then the region will also be flat. The intuition is that the error in the upper bound becomes smaller as $S_y$ becomes larger, i.e. when the model predicts correctly and confidently. We will also provide evidence for a converse, i.e. that a flat minimum has large margins, in the following experimental sections. Finally, we note that without the expression in Proposition 3.1, deriving the upper bound $1 - \gamma^2(S(\phi(\theta, x)), y)$ would have required guesswork.
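As a sanity check of Proposition 3.1, the following PyTorch sketch (ours, not from the paper) compares the exact autograd Hessian trace of the cross-entropy loss of a random affine predictor against the ACEHT expression; the two printed values should agree up to floating-point error.

```python
import torch

C, d = 4, 3
torch.manual_seed(0)
x = torch.randn(d)
y = torch.tensor(1)
params = torch.randn(C * d + C, requires_grad=True)  # flattened (theta, b)

def loss_fn(p):
    theta, b = p[: C * d].view(C, d), p[C * d:]
    logits = theta @ x + b
    return torch.nn.functional.cross_entropy(logits.unsqueeze(0), y.unsqueeze(0))

# Exact Hessian trace via autograd (feasible only for tiny parameter counts).
H = torch.autograd.functional.hessian(loss_fn, params)
empirical_trace = torch.diagonal(H).sum()

with torch.no_grad():
    theta, b = params[: C * d].view(C, d), params[C * d:]
    S = torch.softmax(theta @ x + b, dim=0)
    aceht = (x.pow(2).sum() + 1) * (1 - S.pow(2).sum())

print(empirical_trace.item(), aceht.item())
```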
3.2 EXTENSION TO THE NON-LINEAR CASE
Now we will attempt to extend the derivation of the previous section to the non-linear case. This is a challenging undertaking, so we resort to numerical evidence. To extend the results from the affine case we consider both the ACEHT and the upper bound $\mathrm{ACEHT}(S(\phi(\theta, x))) \leq (|x|^2 + 1)(1 - \gamma^2(S(\phi(\theta, x)), y))$, to which we refer as the “margin bound”. We compare both quantities to the empirically derived Hessian trace. To compute the empirical Hessian trace we use the PyHessian package (Yao et al., 2019), which implements Hutchinson’s method (Bai et al., 1996; Avron & Toledo, 2011).
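For reference, a minimal sketch of Hutchinson’s estimator, in the spirit of what packages such as PyHessian implement, might look as follows (a sketch assuming a scalar loss built from differentiable parameters); it relies on Hessian-vector products, so the Hessian is never formed explicitly.

```python
import torch

def hutchinson_trace(loss, params, num_samples=100):
    # Estimates Tr(H) as E[v^T H v] over random Rademacher vectors v.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimate = 0.0
    for _ in range(num_samples):
        vs = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]  # +/-1 entries
        Hv = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        estimate += sum((v * hv).sum() for v, hv in zip(vs, Hv)).item()
    return estimate / num_samples
```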
To compare the quantities we compare their distributions over the data. Specifically, let $(X, Y) \sim \mathcal{D}$ and fix $\theta$; we then compute the Pearson Correlation Coefficient (r-value) (Lee Rodgers & Nicewander, 1988) between the random variables $\mathrm{Tr}(H(\ell(\phi(\theta, X), Y)))$ and $\mathrm{ACEHT}(S(\phi(\theta, X)))$, and similarly for the margin bound. The choice of the r-value is natural because in the affine case the ACEHT and the Hessian trace are equivalent, so a linear relationship should be expected. Our method is also more general than comparing a single statistic of the above random variables, such as the average (which is generally used for flatness measures).
For example, while the smallest margin over the dataset is commonly used as a generalization measure (Bartlett et al., 2017; Jiang et al., 2019; Neyshabur et al., 2017), Jiang et al. (2018) showed that higher moments of the distribution are a much better predictor of generalizability, as we will also see in Section 4.
Figure 2 is an example of such a fit for an affine predictor. While the high r-value of 0.97 confirms our analytic results, we also observe that the fit is not perfect, even though the relationship is exact in this case. The inaccuracies are due to the numerical methods used and become more pronounced the higher the Hessian trace is. To avoid outliers heavily impacting the linear regression model in the non-linear case, we use the scikit-learn class LocalOutlierFactor (Breunig et al., 2000) to remove outliers before fitting the line. This prevents hand-picked points from skewing the results and also stabilizes them.
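A sketch of this correlation pipeline (our reconstruction; the contamination level is an assumption, not a value reported here) could look as follows.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.neighbors import LocalOutlierFactor

def robust_r_value(hessian_traces, bound_values, contamination=0.05):
    # Stack the two per-datapoint quantities, drop outliers, then correlate.
    pts = np.column_stack([hessian_traces, bound_values])
    inliers = LocalOutlierFactor(contamination=contamination).fit_predict(pts) == 1
    r, _ = pearsonr(pts[inliers, 0], pts[inliers, 1])
    return r
```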
3.2.1 EMPIRICAL EVIDENCE
We present our results using the convolutional neural network LeNet on the MNIST dataset, as they are representative of what we have observed on other architectures, hyperparameters, and datasets (see Appendix B). Our results use stochastic gradient descent with a fixed learning rate and batch size chosen to achieve an appropriate performance on the classification task. Because of the computational difficulty of computing the empirical Hessian trace for every single element in the input data, we consider 1,000 randomly selected datapoints from the training set. To highlight the computational difficulty of using even very optimized numerical tools such as PyHessian, we note that it takes us roughly 1.5 hours to compute the Hessian trace for the whole MNIST dataset, while it takes only 5 seconds for the margins.
In Figure 3 we present the plots relating the empirical Hessian trace to the ACEHT and margin bound over the randomly sampled datapoints. Figures 3a and 3b show that for most of training, the correlation is between 0.8 and 1. Combining Figures 3a and 3c, it can be seen that the r-value increases with the model training accuracy. Furthermore, the datapoints which are incorrectly predicted do not show a correlation.
With that we confirm the intuition that flatter solutions are indeed more robust and have larger margins. While we have found flatness and margins to be highly correlated in scenarios in which others have identified flatness to be a good generalization measure (Jiang et al., 2019; Keskar et al., 2016; Chaudhari et al., 2019), it may be that this is also an epiphenomenon of stochastic gradient descent or some other process, and there may be situations in which the relationship does not hold. However, our general advice to consider margins more is not impacted by this. In the scenarios where generalizability and flatness have been linked, we have also shown that margins and flatness are correlated; hence it is advantageous to use margins instead, for computational reasons or for more complete intuition. The only situation in which it is more likely that margins and flatness are not correlated is when flatness has not yet been linked to generalizability. In such a situation it may also be better to use the better-understood margin measure instead of a flatness measure to assess generalizability. In the next section we consider the first case, where we examine a general scenario in which flatness has been used to reason about generalizability and offer a more insightful margin perspective.
4 PERSPECTIVE ON LARGE AND SMALL-BATCH METHODS
We now show how our results lead to a better understanding of phenomena which have been misleadingly attributed to flat minima. To do so, we consider the experiments by Keskar et al. (2016) which rekindled the debate around flat minima, where flatness was used to explain why small-batch methods tend to generalize better than large-batch methods. The idea was that small-batch methods converge to flatter minima due to them being able to “escape” sharp minima more easily. However, it has been shown that the minima of both methods appear to be in the same attractive basin (Sagun et al., 2017; Freeman & Bruna, 2016; Draxler et al., 2018), meaning that small-batch methods do not seem to escape any attractive basin but are merely in a different area of the same attractive basin. While the results gave credence to flatter minima generalizing better, flatter minima do not seem to provide the full picture for why large-batch methods tend to do worse, and we believe that an explanation in terms of the margins is more illuminating.
4.1 EXPERIMENT SETUP
We replicate the experiment by Keskar et al. (2016) for a fully connected network with batch-normalized layers on the MNIST dataset, as described in Appendix A. We chose the large-batch size to be 4096 and the small-batch size to be 256. To have a fair comparison, we use the same seed and take 10,000 gradient steps for both methods, instead of basing the stopping time on epochs. We also used stochastic gradient descent without momentum. With our setup we observe a similar phenomenon as Keskar et al. (2016) in Table 1. The small and large-batch methods both attain the same training accuracy and comparable training loss. However, the small-batch method is at a considerably flatter minimum and generalizes better than the large-batch method. We will now show that instead of considering the flatness, it is more insightful to consider margins to explain the difference in generalizability.
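A minimal sketch of this setup (our reconstruction; the learning rate is a placeholder, not a value reported here) is:

```python
import itertools
import torch
from torch.utils.data import DataLoader

def train_fixed_steps(model, dataset, batch_size, num_steps=10_000, lr=0.1, seed=0):
    # Same seed and the same number of gradient steps for both batch sizes,
    # so neither method receives extra updates; plain SGD without momentum.
    torch.manual_seed(seed)
    batches = itertools.cycle(DataLoader(dataset, batch_size=batch_size, shuffle=True))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(num_steps):
        x, y = next(batches)
        opt.zero_grad()
        torch.nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return model
```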
While the upper bound of the ACEHT is in terms of the softmax margins, we consider the output margins in this section. The reason is that most margin-based generalization measures use the output margins. Another, more practical reason is that towards the end of training the softmax margins are all very close to 1, making it difficult to visualize and observe the distribution. We also do not use a normalized version of the margins (such as Bartlett et al. (2017); Jiang et al. (2018)). Our reasoning is that because we use the same architecture, the same dataset, and train in a similar manner, the margin distributions will be comparable.
4.2 A MARGIN PERSPECTIVE ON LARGE AND SMALL-BATCH SIZES
In Figure 4 we see that the output margins and the Hessian trace are correlated, as expected from Section 3. We can also roughly see that the small-batch method has fewer low margins than the large-batch method. To emphasize this difference we consider Figure 4c, where we plot the histogram and box-plot of the output margin distribution for both the large-batch and small-batch method. We also display the skewness of each, i.e. the standardized third moment centered around the mean. The box-plots and the skewness confirm that the small-batch method is dominated by large margins, indicating better generalizability (as discussed in Bartlett et al. (2017); Jiang et al. (2018)). The idea with a left-skewed margin distribution is that the tail with low-margin datapoints is mostly comprised of outliers and will not massively affect the robustness to input perturbations. This soft-margin SVM perspective is in contrast to hard-margin SVMs, where the margin is defined to be the minimum of all the distances to the decision boundary (Shalev-Shwartz & Ben-David, 2014). If a hard-margin view were adopted, then the small-batch method would be predicted to generalize worse, because it has the smallest margin, as we see in Figure 4c. However, the distribution of the small-batch method is also more left-skewed, which points to this minimum margin being an outlier rather than being indicative of generalizability.
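For completeness, a small sketch (ours) of the summary statistics behind this comparison; scipy.stats.skew computes the standardized third central moment used here.

```python
import numpy as np
from scipy.stats import skew

def margin_summary(output_margins):
    # Negative skew ("left-skewed") means the distribution is dominated by
    # large margins, with a thin tail of low-margin outliers.
    return {
        "min": float(np.min(output_margins)),       # the hard-margin view
        "median": float(np.median(output_margins)),
        "skewness": float(skew(output_margins)),    # the soft-margin view
    }
```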
We now want to explain why the small-batch method generalizes well. As observed in Jastrzębski et al. (2017), a smaller batch size is similar to a larger learning rate, hence at every step the process advances further than a large-batch method would. It has already been noted by Hoffer et al. (2017) that training longer leads the large-batch method to generalize just as well as the small-batch method because it had time to “catch up”, even though the decrease in training loss may be barely noticeable. We have also seen, via Banburski et al. (2019), that SGD converges to margin-maximizing solutions. Therefore, a method that is able to train or advance further will also be closer to a margin-maximizing solution. We therefore expect that large-batch methods not having had enough time to maximize margins is the driving force behind the large vs. small-batch phenomenon.
5 BECOMING FLATTER WITH INCREASING MARGINS
Reparametrization problems such as those shown by Dinh et al. (2017) are neither a new phenomenon nor should they necessarily discourage the design of algorithms which attempt to find flat minima. Rather, they indicate which aspects of a generalization measure need to be adjusted to allow it to be used in a practical setting. For SVMs, the problem of scaling the hyperplane normal to increase margins of correctly classified points is solved by scaling the normal to make it a unit vector, transforming the functional margin into the geometric margin (Shalev-Shwartz & Ben-David, 2014). In the case of neural networks, it is also known that scaling the last layer leads to an increase in the margins for data which has been correctly predicted (Neyshabur et al., 2017). This scaling issue has been successfully addressed (see Bartlett et al. (2017); Elsayed et al. (2018); Jiang et al. (2018)). Due to the relationship to the classification margins it is natural to ask whether flatness suffers from a similar problem. We confirm this with the following Proposition:
Proposition 5.1. For a given neural network $\phi$ let $T_\alpha : \Theta \to \Theta$ be such that for all $x \in X$ and $\theta \in \Theta$ we have $\phi(T_\alpha(\theta), x) = \alpha\phi(\theta, x)$. Now assume a $\theta' \in \Theta$ and a datapoint $(x', y')$ for which $\arg\max_{k \in \{1, \dots, C\}} (\phi(\theta', x'))_k = y'$. Then
$$\forall s, t \in \{1, \dots, \dim(\Theta)\}: \quad \lim_{\alpha \to \infty} \partial_{\theta_s}\partial_{\theta_t} \ell(\phi(T_\alpha(\theta'), x'), y') = 0. \tag{1}$$
The proof is in Appendix D. From the Proposition we immediately derive the following Corollary:
Corollary 5.2. Assume that $\phi$ and $\theta$ predict every datapoint in a set $D$ correctly. Then
$$\forall s, t \in \{1, \dots, \dim(\Theta)\}: \quad \lim_{\alpha \to \infty} \partial_{\theta_s}\partial_{\theta_t} L_D(T_\alpha(\theta)) = 0. \tag{2}$$
Due to the Corollary, if a network has achieved full training accuracy, then the network is equivalent under the $T_\alpha$ transformation to an arbitrarily flat network. We note that such a $T_\alpha$ transform exists for most networks. Scaling the last layer is one simple instance of such a transform. Another is that, for fully connected and convolutional networks with ReLU non-linearities, non-negative homogeneity implies that scaling each layer also results in a valid $T_\alpha$ transformation. The crucial property of the $T_\alpha$ map is that it does not change the relative order of the model outputs, and therefore, given two networks which have achieved full training accuracy, we cannot determine which network should generalize better based solely on the flatness of the local geometry. We note that Banburski et al. (2019) mentioned such an issue, but they did not discuss it in the context of flat minima, and their arguments relied on further structure which we believe is less illuminating than our presentation and proofs.
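As an illustration, the simplest $T_\alpha$, scaling the last layer, can be sketched as follows (assuming the model’s final child module is a linear layer with a bias):

```python
import torch

def scale_last_layer(model, alpha):
    # Multiplying the final linear layer's weight and bias by alpha scales all
    # logits by alpha: predictions are unchanged, output margins grow, and by
    # Proposition 5.1 the loss curvature at correctly classified points
    # vanishes as alpha grows.
    last = list(model.children())[-1]
    with torch.no_grad():
        last.weight.mul_(alpha)
        last.bias.mul_(alpha)
    return model
```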
6 CONCLUSIONS
In this paper, we have related flatness to the classification margins in a principled manner, in contrast to other works that have made a more intuitive or less quantifiable connection (Huang et al., 2019; Wang et al., 2018; Neyshabur et al., 2017; Petzka et al., 2020). Our results lead to the immediate practical recommendation of using margins instead of the computationally expensive flatness to assess generalizability. We also use our results to replace the misleading notion that small-batch methods generalize better because they “escape” sharp minima, instead arguing that small-batch methods have more time to maximize margins. We were also motivated by the flatness and margin relationship to highlight that neural networks can be made arbitrarily flat. This implies that the generalizability of two networks cannot be distinguished based on flatness, which needs to be addressed to make flatness a viable generalization measure. Based on our results, future work may assess whether flatness is an epiphenomenon of the optimization methods, because recent work on margins (e.g. Banburski et al. (2019); Soudry et al. (2018)) can now be applied to reason about flatness. Furthermore, by relating properties of the parameter space (flatness) to properties of the input space (margins), there is now an opportunity to further explore results such as those by Sagun et al. (2017), who found that the Hessian with respect to the parameters of a neural network upon convergence has as many positive eigenvalues as the number of classes in the dataset used. Overall, our results enable a more principled discussion of how flatness may contribute to generalizability.
A APPENDIX: NETWORK ARCHITECTURE AND DATASETS
A.1 NETWORK ARCHITECTURE
We implement the convolutional neural network LeNet-5 as described in LeCun et al. (1998). Our fully connected neural network with batch-normalized layers (FCNBN) is inspired by Keskar et al. (2016). It has a 784-dimensional (MNIST) or 1024-dimensional (CIFAR10) input layer followed by three batch-normalized (Ioffe & Szegedy, 2015) layers with ReLU non-linearities and a 10-dimensional output layer.
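A sketch of the FCNBN in PyTorch (ours; the hidden width is an assumption, as it is not specified in the text) could be:

```python
import torch.nn as nn

def fcnbn(input_dim=784, width=512, num_classes=10):
    # Three batch-normalized ReLU layers followed by a linear output layer.
    layers, d = [], input_dim
    for _ in range(3):
        layers += [nn.Linear(d, width), nn.BatchNorm1d(width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, num_classes))
    return nn.Sequential(*layers)
```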
A.2 DATASETS
We evaluate on the MNIST and CIFAR10 image classification datasets; for the fully connected network, the inputs are flattened to 784 (MNIST) and 1024 (CIFAR10) dimensions, matching the input layers above.
B APPENDIX: FLATNESS AND MARGIN CORRELATION
Here we present further evidence of the flatness and margin correlation discussed in Section 3. As in Section 3, we have used appropriate learning rates and batch sizes to get a reasonable performance for the task, and have observed our results to hold for different hyperparameters. One instance where we demonstrate two different batch sizes is the fully connected network with batch normalization on MNIST (Section B.1), where we present results for batch sizes of 256 and 4096. We again only consider 1,000 randomly selected datapoints from the training set due to the computational difficulty of computing the Hessian trace. If the network achieves full training accuracy and there are no incorrectly classified datapoints, we set the r-value to zero.
Overall, we observe the same results as in Section 3 and a correlation between 0.8 and 1. As before, the correlation increases with increasing training accuracy for correctly predicted datapoints.
B.1 FULLY CONNECTED NETWORK WITH BATCH NORMALIZATION ON MNIST
B.1.1 BATCH SIZE: 256
[Figure: panels (a) ACEHT to HT, (b) SM Bound to HT, (c) Training Accuracy; (d) Initialization, (e) Step 5000, (f) Final Step (10,000)]
B.1.2 BATCH SIZE: 4096
[Figure: panels (a) ACEHT to HT, (b) SM Bound to HT, (c) Training Accuracy; (d) Initialization, (e) Step 5000, (f) Final Step (10,000)]
B.2 LENET ON CIFAR10
[Figure: panels (a) ACEHT to HT, (b) SM Bound to HT, (c) Training Accuracy; (d) Initialization, (e) Step 5000, (f) Final Step (10,000)]
B.3 FULLY CONNECTED NETWORK WITH BATCH NORMALIZATION ON CIFAR10
[Figure: panels (a) ACEHT to HT, (b) SM Bound to HT, (c) Training Accuracy; (d) Initialization, (e) Step 5000, (f) Final Step (10,000)]
C APPENDIX: DERIVATIVES OF THE CROSS-ENTROPY LOSS
C.1 GENERAL FORM
For the general form we consider the cross-entropy loss for a predictor function which is scaled by some scalar $\alpha$. Specifically, we assume an arbitrary input-output pair $(x, y) \in X \times Y$ and compute the partial derivatives with respect to the parameters $\theta$ of the predictor function $\alpha\phi(\theta, x)$. Since the equations can become very long, we declutter the notation by letting $S = S(\alpha\phi(\theta, x))$, $\phi = \phi(\theta, x)$, and for two $d$-dimensional vectors $x, y \in \mathbb{R}^d$ we write $\langle x, y \rangle = \sum_{i=1}^{d} x_i y_i$. We also denote elementwise multiplication by $\odot$ and let $\Phi$ be the matrix such that $(\Phi)_{ij} = \phi_j$.
Lemma C.1. The first partial derivative of the cross-entropy loss with respect to an element $\theta_i$ is:
$$\partial_{\theta_i} \ell(\alpha\phi(\theta, x), y) = -\alpha\Big(\partial_{\theta_i}\phi_y - \sum_{l=1}^{C} \partial_{\theta_i}\phi_l\, S_l\Big) = -\alpha\big(\partial_{\theta_i}\phi_y - \langle \partial_{\theta_i}\phi, S \rangle\big).$$
Proof.
$$\partial_{\theta_i} \ell(\alpha\phi, y) = \partial_{\theta_i}\big(-\log(S_y)\big) = -\frac{1}{S_y}\, \partial_{\theta_i} S_y. \tag{3}$$
With some manipulation we compute $\partial_{\theta_i} S_y$:
$$\partial_{\theta_i} S_y = \frac{\partial_{\theta_i} e^{\alpha\phi_y}}{\sum_{k=1}^{C} e^{\alpha\phi_k}} - \frac{e^{\alpha\phi_y}}{\big(\sum_{k=1}^{C} e^{\alpha\phi_k}\big)^2} \sum_{l=1}^{C} \partial_{\theta_i} e^{\alpha\phi_l} = \alpha\, \partial_{\theta_i}\phi_y \frac{e^{\alpha\phi_y}}{\sum_{k=1}^{C} e^{\alpha\phi_k}} - \alpha \frac{e^{\alpha\phi_y}}{\sum_{k=1}^{C} e^{\alpha\phi_k}} \sum_{l=1}^{C} \frac{e^{\alpha\phi_l}}{\sum_{k=1}^{C} e^{\alpha\phi_k}}\, \partial_{\theta_i}\phi_l = S_y\, \alpha\Big(\partial_{\theta_i}\phi_y - \sum_{l=1}^{C} \partial_{\theta_i}\phi_l\, S_l\Big). \tag{4}$$
Combining Equations 3 and 4 we obtain Lemma C.1:
$$\partial_{\theta_i} \ell(\alpha\phi, y) = -\alpha\Big(\partial_{\theta_i}\phi_y - \sum_{l=1}^{C} \partial_{\theta_i}\phi_l\, S_l\Big).$$
Lemma C.2. The second partial derivative of the cross-entropy loss with respect to elements $\theta_s$ and $\theta_t$ is:
$$\partial_{\theta_t}\partial_{\theta_s} \ell(\alpha\phi(\theta, x), y) = -\alpha\big(\partial_{\theta_t}\partial_{\theta_s}\phi_y - \langle \partial_{\theta_t}\partial_{\theta_s}\phi, S \rangle - \langle \partial_{\theta_s}\phi,\ \alpha S \odot (\partial_{\theta_t}\phi - (\partial_{\theta_t}\Phi) S) \rangle\big).$$
Proof. Differentiating the first-order derivative given by Lemma C.1, we obtain by the multi-variable chain rule:
$$\partial_{\theta_t}\partial_{\theta_s} \ell(\alpha\phi, y) = -\alpha\big(\partial_{\theta_t}\partial_{\theta_s}\phi_y - \langle \partial_{\theta_t}\partial_{\theta_s}\phi, S \rangle - \langle \partial_{\theta_s}\phi, \partial_{\theta_t} S \rangle\big). \tag{5}$$
To compute $\partial_{\theta_t} S$ in Equation 5 we use Equation 4 and obtain:
$$(\partial_{\theta_t} S)_i = \partial_{\theta_t} S_i = S_i\, \alpha\big(\partial_{\theta_t}\phi_i - \langle \partial_{\theta_t}\phi, S \rangle\big),$$
which after some simplification reduces to:
$$\partial_{\theta_t} S = \alpha S \odot \big(\partial_{\theta_t}\phi - (\partial_{\theta_t}\Phi) S\big). \tag{6}$$
Combining Equations 5 and 6 we obtain Lemma C.2:
$$\partial_{\theta_t}\partial_{\theta_s} \ell(\alpha\phi, y) = -\alpha\big(\partial_{\theta_t}\partial_{\theta_s}\phi_y - \langle \partial_{\theta_t}\partial_{\theta_s}\phi, S \rangle - \langle \partial_{\theta_s}\phi,\ \alpha S \odot (\partial_{\theta_t}\phi - (\partial_{\theta_t}\Phi) S) \rangle\big).$$
C.2 AFFINE CROSS-ENTROPY HESSIAN TRACE
We now present the proof of Proposition 3.1:
Proof. Throughout the proof we make use of Lemma C.2 and set $\alpha = 1$. We also note that any second derivative of $\phi((\theta, b), x)$ with respect to the parameters is zero, since $\phi$ is an affine classifier.
We first consider the derivatives with respect to elements of $\theta$, where we use $\theta_{i,j}$ to denote the element in the $i$th row and $j$th column of the matrix $\theta$. Notice that $\partial_{\theta_{i,j}}\phi = x_j e_i$, which we write as $x^i_j$. The second-order derivatives are given by:
$$\partial_{\theta_{i,j}}\partial_{\theta_{s,t}} \ell(\phi, y) = \langle x^s_t,\ S \odot (x^i_j - x_j S_i \mathbf{1}) \rangle = x_t S_s\big((x^i_j)_s - x_j S_i\big).$$
When computing the trace we only need the elements on the diagonal, and hence we get:
$$\partial_{\theta_{i,j}}\partial_{\theta_{i,j}} \ell(\phi, y) = x_j S_i(x_j - x_j S_i) = x_j^2 S_i(1 - S_i).$$
Now we consider derivatives with respect to elements of $b$ and notice that $\partial_{b_i}\phi = e_i$. For the second derivative we then get:
$$\partial_{b_i}\partial_{b_j} \ell(\phi, y) = \langle e_j,\ S \odot (e_i - S_i \mathbf{1}) \rangle = S_j(\delta_{ij} - S_i),$$
so on the diagonal $\partial_{b_i}\partial_{b_i} \ell(\phi, y) = S_i(1 - S_i)$.
Finally, summing up the diagonal of the total Hessian we get:
$$\mathrm{Tr}(H(\ell(\phi((\theta, b), x), y))) = \sum_{i,j} x_j^2 S_i(1 - S_i) + \sum_i S_i(1 - S_i) = (|x|^2 + 1)\Big(1 - \sum_j S_j^2\Big),$$
where we used the fact that $\sum_i S_i = 1$.
D APPENDIX: SCALING PROOF
To prove Proposition 5.1 we first prove the following lemma:
Lemma D.1. Assume that the argmax of $\phi$ is the correct class $y$ and is unique. Then for $k \in \mathbb{N}$, $k \geq 1$ and $i \neq y$:
$$\lim_{\alpha \to \infty} \alpha^k S_i(\alpha\phi) = 0. \tag{7}$$
Proof. Let $y$ be such that $\phi_y = \max_{k \in \{1, \dots, C\}} \phi_k$. For $i \neq y$ we have:
$$\lim_{\alpha \to \infty} \alpha^k \frac{e^{\alpha\phi_i}}{\sum_{k=1}^{C} e^{\alpha\phi_k}} = \lim_{\alpha \to \infty} \frac{\alpha^k}{e^{\alpha(\phi_y - \phi_i)} + \sum_{k=1, k \neq y}^{C} e^{\alpha(\phi_k - \phi_i)}} = \lim_{\alpha \to \infty} \frac{k!}{(\phi_y - \phi_i)^k e^{\alpha(\phi_y - \phi_i)} + \sum_{k=1, k \neq y}^{C} (\phi_k - \phi_i)^k e^{\alpha(\phi_k - \phi_i)}},$$
where the last equality follows from applying L'Hôpital's rule $k$ times. Since we assumed that $y$ is the only index in $\{1, \dots, C\}$ such that $\phi_y = \max_{k \in \{1, \dots, C\}} \phi_k$, we have $\phi_k < \phi_y$ for all $k \neq y$. Hence, as $\alpha \to \infty$ we have $e^{\alpha(\phi_k - \phi_y)} \to 0$. Dividing the numerator and denominator by $e^{\alpha(\phi_y - \phi_i)}$, we therefore obtain:
$$\lim_{\alpha \to \infty} \frac{k!\, e^{\alpha(\phi_i - \phi_y)}}{(\phi_y - \phi_i)^k + \sum_{k=1, k \neq y}^{C} (\phi_k - \phi_i)^k e^{\alpha(\phi_k - \phi_y)}} = 0.$$
We are now ready to prove Proposition 5.1:
Proof. We first show that the term $-\alpha^2 \langle \partial_{\theta_i}\phi,\ S \odot (\partial_{\theta_t}\phi - (\partial_{\theta_t}\Phi) S) \rangle$ always goes to zero. Expanding, we get:
$$\alpha^2 \sum_{l=1}^{C} \partial_{\theta_i}\phi_l\, S_l \Big(\partial_{\theta_t}\phi_l - \sum_{k=1}^{C} \partial_{\theta_t}\phi_k S_k\Big) = \alpha^2\, \partial_{\theta_i}\phi_y\, S_y \Big(\partial_{\theta_t}\phi_y - \sum_{k=1}^{C} \partial_{\theta_t}\phi_k S_k\Big) + \alpha^2 \sum_{l=1, l \neq y}^{C} \partial_{\theta_i}\phi_l\, S_l \Big(\partial_{\theta_t}\phi_l - \sum_{k=1}^{C} \partial_{\theta_t}\phi_k S_k\Big).$$
We now show that each term in the sum goes to zero. Consider $l \neq y$. By the triangle inequality and $0 \leq S_l \leq 1$:
$$\Big|\alpha^2 \sum_{k=1}^{C} \partial_{\theta_i}\phi_l\, S_l\, (\partial_{\theta_t}\phi_l - \partial_{\theta_t}\phi_k S_k)\Big| \leq \alpha^2 S_l \sum_{k=1}^{C} \big|\partial_{\theta_i}\phi_l\, (\partial_{\theta_t}\phi_l - \partial_{\theta_t}\phi_k S_k)\big| = \alpha^2 S_l M.$$
Here we let $M = \sum_{k=1}^{C} |\partial_{\theta_i}\phi_l (\partial_{\theta_t}\phi_l - \partial_{\theta_t}\phi_k S_k)|$ and note that $M < \infty$ for all $0 < \alpha < \infty$, since $0 \leq S_k \leq 1$, the quantities $\partial_{\theta_i}\phi_l, \partial_{\theta_t}\phi_l, \partial_{\theta_t}\phi_k$ are constants, and it is a finite sum. By Lemma D.1, as $\alpha \to \infty$ we have $\alpha^2 S_l M \to 0$ and hence $\alpha^2 \sum_{k=1}^{C} \partial_{\theta_i}\phi_l S_l (\partial_{\theta_t}\phi_l - \partial_{\theta_t}\phi_k S_k) \to 0$.
We now consider $l = y$:
$$\Big|\alpha^2\, \partial_{\theta_i}\phi_y\, S_y \Big(\partial_{\theta_t}\phi_y - \sum_{k=1}^{C} \partial_{\theta_t}\phi_k S_k\Big)\Big| \leq \alpha^2\, |\partial_{\theta_i}\phi_y|\, S_y \Big(\big|\partial_{\theta_t}\phi_y - \partial_{\theta_t}\phi_y S_y\big| + \sum_{k=1, k \neq y}^{C} |\partial_{\theta_t}\phi_k S_k|\Big).$$
For the first summand we have
$$\alpha^2 \big|\partial_{\theta_t}\phi_y - \partial_{\theta_t}\phi_y S_y\big| = |\partial_{\theta_t}\phi_y|\, \alpha^2 (1 - S_y) = |\partial_{\theta_t}\phi_y|\, \alpha^2 \sum_{s=1, s \neq y}^{C} \frac{e^{\alpha\phi_s}}{\sum_{m=1}^{C} e^{\alpha\phi_m}} = |\partial_{\theta_t}\phi_y|\, \alpha^2 \sum_{s=1, s \neq y}^{C} S_s \quad \text{(since } S_s > 0\text{)},$$
and using similar arguments and Lemma D.1 it follows that this term is zero in the limit. It is likewise clear that $\alpha^2 \sum_{k=1, k \neq y}^{C} |\partial_{\theta_t}\phi_k S_k|$ goes to zero.
We are left with showing that $\alpha\big(\partial_{\theta_t}\partial_{\theta_i}\phi_y - \langle \partial_{\theta_t}\partial_{\theta_i}\phi, S \rangle\big)$ goes to zero (this is only guaranteed when $y$ is the true label). We use the same method as above:
$$\big|\alpha\big(\partial_{\theta_t}\partial_{\theta_i}\phi_y - \langle \partial_{\theta_t}\partial_{\theta_i}\phi, S \rangle\big)\big| \leq \alpha\Big(\big|\partial_{\theta_t}\partial_{\theta_i}\phi_y - \partial_{\theta_t}\partial_{\theta_i}\phi_y S_y\big| + \sum_{l=1, l \neq y}^{C} |\partial_{\theta_t}\partial_{\theta_i}\phi_l S_l|\Big),$$
and the result follows by again applying Lemma D.1.

REVIEW

1. What is the main contribution of the paper regarding flatness and margins in deep networks?
2. What are the strengths and weaknesses of the paper's arguments and experiments?
3. How does the reviewer assess the novelty and insightfulness of the paper's claims?
4. Are there any existing works that the paper should engage with more deeply?
5. What are some minor suggestions for improving the paper?
Update: Since there's no substantial author response, I'm keeping my score as it is. All the best for your future submissions!
Summary of paper
This paper argues that the flatness of a deep network at an input --- quantified by the Hessian trace of the cross-entropy loss --- roughly corresponds to the margin of the network on that point. The paper argues this by first explicitly deriving a relation between the two for a linear classifier. Then, for non-linear networks, it demonstrates this relation empirically via correlations between the two terms, on two architectures (LeNet and fully connected network) and datasets (MNIST and CIFAR10).
Then the paper uses the above insight to explain why large-batch methods appear to generalize worse than small-batch methods. The observation is that the margins of small-batch methods are on average larger than those of large-batch methods, and this implies that the large-batch method has not trained long enough to maximize its margins and generalize well.
Finally, the paper shows that rescaling the weights of the network can lead to arbitrarily flat loss landscapes and arbitrarily large margins.
Strengths
The paper attempts to make a variety of interesting points connecting flatness and margins. The claims made are easy to understand, and are supported by a variety of illuminating experiments.
Weaknesses
The paper violates the ICLR margins. Would be great if the authors could resubmit a version of the paper with the correct margins.
The argument that "flatness is related to margins" as such I'm afraid is not novel to me and has appeared in PAC-Bayesian analyses. Here, one thinks of flatness in terms of how much random perturbation a trained deep network can withstand until a non-negligible fraction of the training data ends up being misclassified. If the training data is classified by a larger margin, the classifier would withstand larger perturbations, hence implying more flatness. For example, Neyshabur et al., '18 has a nice formalization of this. Having said that, I agree that the relationship between the trace of the Hessian of the cross-entropy loss and the margin is, strictly speaking, novel. However, I'm not sure I see much new insight here, since this insight is claimed to be a fundamental contribution of this paper.
There are many existing "normalized" flatness measures, which I feel this paper should have engaged with more deeply than it has. In particular, this paper argues that the trace of the Hessian or the margin of the deep network can be scaled arbitrarily without affecting the performance of the network -- therefore these quantities in themselves are bad measures of generalization. Isn't this exactly the point made in Neyshabur et al., '17,'18, Bartlett et al., '17 etc., in the context of generalization bounds, and in Dinh et al., '17 in the context of Keskar et al., '17's work? The argument as to what is fundamentally new in this paper when compared to those is lacking.
I'd also want to note that Hessian-based flatness measures capture only limited local information, given that the landscape is non-convex and we have ReLU units. Would be nice to note this in the paper. (This is however not a problem with PAC-Bayesian measures.)
Unfortunately, the section explaining why small-batch training generalizes better was not any more illuminating to me than what we understood from Hoffer et al. (2017) and Jastrzębski et al. (2017).
I'd also like to note that "Flatness is a False Friend" (Diego Granziol, https://arxiv.org/abs/2006.09091) makes a similar point about how training longer naturally leads to flatter minima in terms of the Hessian. Depending on how concurrent the authors think that work is, the authors may want to consider citing it.
Overall opinion
While understanding connections between flatness and generalization is an important direction, unfortunately, I feel that this paper does not provide significantly new insights into these terms. Hence, I'd recommend rejection at this point.
Clarification questions
I'm not sure I understood the architecture that is used in Fig 2. Have you trained a linear classifier?
Minor suggestions
Page 4 "Fig 2 is an examples of"
References
Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. ICLR 2018. https://arxiv.org/pdf/1706.08947.pdf
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantastic generalization measures and where to find them. arXiv preprint arXiv:1912.02178, 2019.
Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1019–1028. JMLR. org, 2017. |
We structure the paper as follows. In Section 2, we discuss both our notation and our motivation choosing the cross-entropy loss and the Hessian trace as the flatness measure and provide further background on the classification margins. In Section 3, we present our contribution showing a strong correlation between the margins and flatness by deriving. In Section 4, we combine recent results based on classification margins to offer a different perspective on the misleading folklore on why larger-batch methods generalize worse. In Section 5, we highlight that networks can be made arbitrarily flat. Lastly, we offer our thoughts and future work in the Section 6.
2 PROBLEM SETTING
We first define the basic notation that we use for a classification task. We let X represent the input space and Y = {1, ..., C} the output space where C are the number of possible classes. The network architecture is given by φ : Θ × X → R|Y| where Θ is the corresponding parameter space. We measure the performance of a parameter vector by defining some loss function ` : RC × Y → R. If we have have a joint probability distribution D relating input and output space then we would
like to minimize the expected loss LD(θ) = E(x,y)∼D[`(φ(θ, x), y)]. Since we usually only have access to some finite dataset D, we denote the empirical loss by L̃D(θ) = 1|D| ∑|D| i=1 `(φ(θ, xi), yi). If LD and L̃D are close, then we would say a model generalizes well, as we were able to train on a finite dataset and extrapolate to the true distribution. We will use the cross-entropy loss which is given by `(φ(θ, x), y) = − log(Sy(φ(θ, x))) where the softmax function S : RC → RC is given by S(a)i =
eai∑C j=1 e aj (see Goodfellow et al. (2016)).
The choice of the cross-entropy function as the loss function has a significant impact on how the flatness measure behaves. Unlike the multiclass mean squared error (MMSE), exponential type losses such as the cross-entropy loss on neural networks have been shown to include implicit regularization which leads to margin maximizing solutions for neural networks (Banburski et al., 2019). Also, various properties for flat minima which have been proven for the MMSE loss by Mulayoff & Michaeli are not applicable to the cross-entropy loss, further highlighting the fundamental differences between the loss functions. While the MMSE loss has shown some promise for many classification tasks (Hui & Belkin, 2020) the cross-entropy loss is still the loss which is most used and was primarily used for the empirical evidence around flat minima (Keskar et al., 2016; Chaudhari et al., 2019), which motivates our choice.
The qualitative description of a flat region was given by Hochreiter & Schmidhuber (1997) as “a large connected region in parameter space where the error remains approximately constant". We measure the flatness by the trace of the Hessian of the loss with respect to the parameters (in short the Hessian trace) denoted by Tr(Hθ(L̃D(θ)) (Dinh et al., 2017). Since the Hessian is symmetric, the Hessian trace is equivalent to the sum of its eigenvalues which for a fixed parameter space is proportional to the expected increase of the second order approximation of the loss around a fixed minimum θ in a random direction θ′ with θ′ ∼ N (θ, I). Since we apply flatness arguments only close to minima, we assume that all eigenvalues are positive and that the Hessian trace is a good measure of flatness Sagun et al. (2017). Even though the Hessian is only an approximation of flatness, the Hessian is often preferred as it allows us to reason about various directions in parameter space via its eigenvectors and eigenvalues (see Sagun et al. (2017); Chaudhari et al. (2019)) and alleviates the issue of infinitely long but sharp ridges making a minimum infinitely flat (Dinh et al., 2017; Freeman & Bruna, 2016). The Hessian has also been linked to feature robustness via its use in the second order approximation of the loss (e.g. Petzka et al. (2020); Borovykh et al. (2019)) and is a promising quantity to relate to the margins.
As we are working with non-linear functions it is intractable to compute exact distances to the decision boundary, therefore we use a measure which is related to the linearized distance as described in Elsayed et al. (2018). Under this view, larger margins are better because the data is further from the decision boundary. Specifically, we define the margins as in Neyshabur et al. (2017): for some vector v ∈ RC and label y we let the margin of v be γ(v, y) = |vy − maxj 6=y vj |. Since we use the margin in different contexts we define the output margins γ(φ(θ, x), y) and the margins of the model output after the softmax layer γ(S(φ(θ, x)), y). Due to the intuition of margins relating to the regularity of the classification regions, they have been proven and shown to be a good generalization measure for linear networks (Langford & Shawe-Taylor, 2003) and later for neural networks (see Bartlett et al. (2017); Jiang et al. (2018; 2019)) when correctly adjusted. Due to results by Banburski et al. (2019) and Soudry et al. (2018), Poggio et al. (2019) claimed that a large part of the mystery around generalizability has been solved, since standard optimization methods are maximizing the margin instead of memorizing data.
3 THE MARGIN AND HESSIAN TRACE RELATIONSHIP
3.1 THE AFFINE CROSS-ENTROPY HESSIAN TRACE
Generally, it is difficult to derive a closed form solution of the Hessian trace due to the non-linear nature of neural networks. To gain insight into what may determine the flatness or sharpness of a solution we consider an affine prediction function for which we derive the following simple and insightful expression for the Hessian trace:
Proposition 3.1 (Affine Cross-Entropy Hessian Trace (ACEHT)). Assume an affine predictor given by φ((θ, b), x) = θx+ b where (θ, b) ∈ RC×d × RC = Θ. Then the trace of the Hessian under the cross-entropy loss assuming our predictor function is:
Tr(H(`(φ((θ, b), x), y))) = (|x|2 + 1)(1− C∑ j=1 S2j (φ(Θ, x))
= (|x|2 + 1)(1− |S(φ(Θ, x))|2).
The derivation is in Appendix C. We immediately observe that the trace of the Hessian is a product of both the size of the input and 1 − η(S(φ(θ, x))) where η(S(φ(θ, x))) = ∑C j=1 S 2 j (φ(Θ, x)), where we can view 1 − η(S(φ(θ, x))) as a confidence measure. In the visualization provided in Figure 1 we clearly see that 1 − η(S(φ(θ, x))) is only zero when the predictor predicts one class with probability 1, regardless of whether it is the correct class or not. When the model is least confident, namely when every entry is predicted with probability 1/C, then 1 − η(S(φ(θ, x))) is also highest. Hence, in the affine case with a cross-entropy loss the Hessian trace can be seen as an indication of the model confidence in its prediction. This confidence interpretation is also connected to classification margins by observing that Sy ≥ γ(S(φ(θ, x)), y) and hence (1 − ∑C j=1 S 2 j (φ(Θ, x)) ≤ 1 − S2y((φ(Θ, x))) ≤ 1 − γ2(S(φ(θ, x)), y). Therefore, if the margins are large then the region will also be flat. The intuition for this is that the error in the upper bound becomes smaller as Sy becomes larger, i.e. when the model predicts correctly and confidently. We will also provide evidence for a converse, i.e. a flat minimum has large margins, in the following experimental sections. Finally, we note that without the expression in Proposition 3.1 we would not have been able to derive the upper bound 1− γ2(S(φ(θ, x)), y) without guesswork.
3.2 EXTENSION TO THE NON-LINEAR CASE
Now we will attempt to extend the derivation of the previous section to the non-linear case. This is a challenging undertaking so we will resort to numerical evidence. To extend the results from the affine case we will consider both the ACEHT and the upper bound ACEHT (S(φ(θ, x)))) ≤ |x|(1 − γ2(S(φ(θ, x)), y)) to which we refer as the "margin bound". We will compare both quantities to the empirically derived Hessian trace. To compute the empirical Hessian trace we use the PyHessian package (Yao et al., 2019) which implements Hutchinson’s method (Bai et al., 1996; Avron & Toledo, 2011).
To compare the quantities we will compare them in terms of their distributions over the data. Specifically, let (X,Y ) ∼ D and fix θ then we compute the Pearson Correlation Coefficient (rvalue) (Lee Rodgers & Nicewander, 1988) between the random variables Tr(H(`(φ(θ,X)), Y )) and ACEHT (S(φ(θ,X))) and similarly for the margin bound. The choice of the r-value is natural because in the affine case the ACEHT and the Hessian trace are equivalent, therefore a linear relationship should be expected. Our method is also more general than just comparing some statistic, such as the average (which is generally used for flatness measures), of the above random variables.
For example, while the smallest margin over the dataset is commonly used a generalization measure (Bartlett et al., 2017; Jiang et al., 2019; Neyshabur et al., 2017), Jiang et al. (2018) showed that higher moments of the distribution are a much better predictor for generalizability as we will also see in Section 4.
Figure 2 is an examples of such a fit for an affine predictor. While the high r-value of 0.97 confirms our analytic results, we also observe that the fit is not perfect, as would be expected due to the exact relationship. The inaccuracies are due to the numerical methods used and become more pronounced the higher the Hessian trace is. To avoid outliers heavily impacting the linear regression model in the non-linear case, we will use the SciPy function LocalOutlierFactor (Breunig et al., 2000) to remove outliers before fitting the line. With this we prevent hand picking points to skew the results and will also stabilize our results.
3.2.1 EMPIRICAL EVIDENCE
We present our results using the convolutional neural network LeNet on the MNIST dataset as they are representative of what we have observed on other architectures, hyperparameters, and datsets (see Appendix B). Our results use stochastic gradient descent with a fixed learning rate and batch size to achieve an appropriate performance on the classification task. Because of the computational difficulty of computing the empirical Hessian trace for every single element in the input data, we consider 1,000 randomly selected datapoints from the training-set. To highlight the computational difficulty of using even very optimized numerical tools, such as PyHessian, we note that it takes us roughly 1,5 hours to compute the Hessian trace for the whole MNIST dataset while it only takes 5 seconds for the margins.
In Figure 3 we present the plots related to the correlation of the empirical Hessian trace to the ACEHT and margin bound over the randomly sampled datapoints. Figures 3a and 3b show that for most of training, the correlation is between 0.8 and 1. Combining Figures 3a and 3c it can be seen that the r-value increases with the model training accuracy. Furthermore, the datapoint which are incorrectly predicted do not show a correlation.
With that we confirm the intuition that indeed, flatter solution are more robust and have larger margins. While we have found flatness and margins to be highly correlated in scenarios in which others have identified flatness to be a good generalization measure (Jiang et al., 2019; Keskar et al., 2016; Chaudhari et al., 2019), it may just be that this is also an epiphenomenon of stochastic gradient descent or some other process and there may be situations in which the relationship does not hold. However, our general advice to consider margins more is not impacted by this. In the scenario where generalizability and flatness have been linked, we have also shown that margins and flatness are correlated, hence it is advantageous to use margins instead due to computational reasons or for more complete intuition. The only situation in which it is more likely that margins and flatness are not correlated is when flatness has not yet been linked to generalizability. In such a situation it may also be better to use the better understood margin measure instead of using a flatness measure to assess generalizability. In the next section we will consider the first case, where we examine a general scenario in which flatness has been used to reason about generalizability and offer a more insightful margin perspective.
4 PERSPECTIVE ON LARGE AND SMALL-BATCH METHODS
We now show how our results lead to a better understanding of phenomena which have been misleadingly attributed to flat minima. To do so, we consider the experiments which rekindled the debate around flat minima by Keskar et al. (2016), where flatness was used to explain why smallbatch methods tend to generalize better than large-batch methods. The idea was that small-batch methods converge to flatter minima due to them being able to "escape" sharp minima more easily. However, it has been shown that the minima of both methods appear to be in the same attractive basin (Sagun et al., 2017; Freeman & Bruna, 2016; Draxler et al., 2018), meaning that small-batch methods do not seem to escape any attractive basin but are merely in a different area of the same attractive basin. While the results gave credence to flatter minima generalizing better, flatter minima do not seem to provide the full picture for why large-batch methods tend to do worse and we believe that an explanation in terms of the margins is more illuminating.
4.1 EXPERIMENT SETUP
We will replicate the experiment by Keskar et al. (2016) for a fully connected network with batchnormalized layers on the MNIST dataset as described in Appendix A. We chose the large-batch size to be 4096 and the small-batch size to be 256. To have a fair comparison, we use the same seed and take 10,000 gradient steps for both methods, instead of basing the stopping time on epochs. We also used stochastic gradient descent without Momentum. With our setup we observe a similar phenomenon as Keskar et al. (2016) in Table 1. The small and large-batch method both attain the same training accuracy and comparable training loss. However, the small-batch method is at a considerably flatter minimum and generalizes better than the large-batch method. We will now show that instead of considering the flatness, it would be more insightful to consider margins to explain the difference in generalizability.
While the upper bound of ACEHT is in terms of the softmax margins, we consider the output margins in this section. The reason is that most margin based generalization measures use the output margins. Another more practical reason is that towards the end of training, the softmax margins are all very close to 1 making it difficult to visualize and observe the distribution. We also do not use a normalized version of the margins (such as Bartlett et al. (2017); Jiang et al. (2018)). Our reasoning
is that because we use the same architecture, the same dataset, and train in a similar manner the margin distributions will be comparable.
4.2 A MARGIN PERSPECTIVE ON LARGE AND SMALL-BATCH SIZES
In Figure 4 we see that the output margins and the Hessian trace are correlate as expected from Section 3. We can also roughly see that the small-batch method has fewer low margins than the largebatch methods. To emphasize this difference we consider Figure 4c where we plot the histogram and box-plot of the output margin distribution for both the large-batch and small-batch method. We also display the skewness of each, which is the third moment centered around the mean. The box-plots and the skewness confirm that the small-batch method is dominated by large margins indicating better generalizability (as discussed in Bartlett et al. (2017); Jiang et al. (2018)). The idea with a left-skewed margin distribution is that the tail with low margin datapoints is mostly compromised of outliers and will not massively affect the robustness to input perturbations. This soft-margin SVM perspective is in contrast to hard-margin SVMs where the margin is defined to be the minimum of all the distances to the decision boundary (Shalev-Shwartz & Ben-David, 2014). If a hard-margin view was adopted, then the small-batch method would be predicted to generalize worse, because it has the smallest margin as we see in Figure 4c. However, the distribution of the small-batch method is also more left skewed, which would point to this minimum being an outlier rather than being indicative of generalizability.
We now want to explain why the small-batch method generalizes well. As observed in Jastrzębski et al. (2017) a smaller batch-size is similar to a larger learning-rate, hence at every step the process will advance further than a large batch-method would. It has already been noted by Hoffer et al. (2017) that training longer leads the large-batch method to generalize just as well as the small-batch method because it had time to "catch up", even though the decrease in training loss may be barely noticeable. We have also seen that SGD converges to margin maximizing solutions by Banburski et al. (2019). Therefore, a method that is able to train or advance further, will also be closer to a margin maximizing solution. We therefore expect that large-batch methods not having had enough time to maximize margins is the driving force behind the large vs small-batch phenomenon.
5 BECOMING FLATTER WITH INCREASING MARGINS
Reparametrization problems such as shown by Dinh et al. (2017) are neither a new phenomenon nor should they necessarily discourage the design of algorithms which attempt to find flat minima. Rather they inform on what aspects of a generalization measure need to be adjusted to allow them to be used in a practical setting. For SVMs, the problem of scaling the hyperplane normal to increase margins of correctly classified points is solved by scaling the normal to make it a unit vector, transforming the functional margin into the geometric margin (Shalev-Shwartz & Ben-David, 2014). In the case of neural networks, it is also known that scaling the last layer leads to an increase in the margins for data which has been correctly predicted (Neyshabur et al., 2017). This scaling issues has been successfully addressed (see Bartlett et al. (2017); Elsayed et al. (2018); Jiang et al. (2018)). Due to the relationship to the classification margins it is natural to ask if flatness suffers from a similar problem. We confirm this with the following Proposition: Proposition 5.1. For a given neural network φ let Tα : Θ → Θ be such that for all x ∈ X and θ ∈ Θ we have φ(Tα(θ), x) = αφ(θ, x). Now assume that θ′ ∈ Θ and a datapoint (x′, y′) for which argmaxk∈{1,...,C}(φ(θ, x ′))k = y ′ then
∀s, t ∈ {1, ..., dim(Θ)} lim α→∞ ∂θs∂θt`(φ(Tα(θ ′), x′), y′) = 0. (1)
The proof is in the Appendix D. From the Proposition we immediately derive the following Corollary: Corollary 5.2. Assume that φ and θ predict every datapoint in a set D correctly then
∀s, t ∈ {1, ..., dim(Θ)} lim α→∞ ∂θs∂θtLD(Tα(θ)) = 0. (2)
Due to the Corollary, if a network has achieved full training accuracy, then the network is equivalent under the Tα transformation to an arbitrarily flat network. We note that there exists such a Tα transform for most networks. Scaling the last layer is one simple instance of such a transform. Another is that for fully connected and convolutional networks with ReLU non-linearities we observe that by the non-negative homogeneity scaling each layer also results in a valid Tα transformation. The crucial property of the Tα map is that it does not change the relative order of the model outputs and therefore, given two networks which have achieved full training accuracy we can not determine which network should generalize better based solely on the flatness of the local-geometry. We note that Banburski et al. (2019) mentioned such an issue but they did not discuss it in the context of flat minima and their arguments relied on further structure which we believe is less illuminating than our presentation and proofs.
6 CONCLUSIONS
In this paper, we have related flatness to the classification margins in a principled manner, in contrast to other works that have made a more intuitive or less quantifiable connection (Huang et al., 2019; Wang et al., 2018; Neyshabur et al., 2017; Petzka et al., 2020). Our results lead to the immediate practical recommendation of using margins instead of the computationally expensive flatness to assess generalizability. We also use our results to replace the misleading notion that small-batch methods generalize better because they "escape" sharp minima, instead arguing that small-batch methods have more time to maximize margins. We were also motivated by the flatness and margin relationship to highlight that neural networks can be made arbitrarily flat. This implies that the generalizability of two networks cannot be distinguished based on flatness alone, which needs to be addressed to make flatness a viable generalization measure. Based on our results, future work may assess whether flatness is an epiphenomenon of the optimization methods, because now recent work on margins (e.g., Banburski et al. (2019); Soudry et al. (2018)) can be applied to reason about flatness. Furthermore, by relating properties of the parameter space (flatness) to properties of the input space (margin), there is now an opportunity to further explore results such as those by Sagun et al. (2017), who found that the Hessian with respect to the parameters of a neural network upon convergence has as many positive eigenvalues as the number of classes in the dataset used. Overall, our results enable more principled discussion on how flatness may contribute to generalizability.
A APPENDIX: NETWORK ARCHITECTURE AND DATASETS
A.1 NETWORK ARCHITECTURE
We implement the convolutional neural network LeNet-5 as described in LeCun et al. (1998). Our fully connected neural network with batch-normalized layers (FCNBN) is inspired by Keskar et al. (2016). It has a 784-dimensional (MNIST) or 1024-dimensional (CIFAR10) input layer followed by three batch-normalized (Ioffe & Szegedy, 2015) layers with ReLU non-linearities and a 10-dimensional output layer.
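A sketch of the FCNBN architecture is given below; the hidden width is an assumption, since the text does not specify it:

```python
import torch.nn as nn

def make_fcnbn(in_dim=784, hidden=512, num_classes=10):
    # in_dim = 784 for MNIST or 1024 for CIFAR10, as described above.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
        nn.Linear(hidden, num_classes),
    )
```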
A.2 DATASETS
B APPENDIX: FLATNESS AND MARGIN CORRELATION
Here we present further evidence of the flatness and margin correlation discussed in Section 3. As in Section 3, we have used appropriate learning rates and batch sizes to get reasonable performance for the task, and have observed our results to hold for different hyperparameters. One instance where we demonstrate two different batch sizes is the fully connected network with batch normalization on MNIST (Section B.1), where we present results for batch sizes of 256 and 4096. We again only consider 1,000 randomly selected datapoints from the training set due to the computational difficulty of computing the Hessian trace. If the network achieves full training accuracy and there are no incorrectly classified datapoints, we set the r-value to zero.
Overall, we observe the same results as in Section 3 and a correlation between 0.8 and 1. As before, the correlation increases with increasing training accuracy for correctly predicted datapoints.
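Since the exact Hessian trace is the expensive quantity mentioned above, a common workaround (a sketch, not necessarily the procedure used in the paper) is Hutchinson's stochastic estimator, which only needs Hessian-vector products:

```python
import torch

def hessian_trace(loss, params, n_samples=100):
    # Hutchinson's estimator: E[v^T H v] = Tr(H) for Rademacher vectors v.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace = 0.0
    for _ in range(n_samples):
        vs = [torch.randint_like(g, 2) * 2.0 - 1.0 for g in grads]
        hvs = torch.autograd.grad(grads, params, grad_outputs=vs,
                                  retain_graph=True)
        trace += sum((v * hv).sum() for v, hv in zip(vs, hvs)).item()
    return trace / n_samples
```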
B.1 FULLY CONNECTED NETWORK WITH BATCH NORMALIZATION ON MNIST
B.1.1 BATCH SIZE: 256
[Figure: (a) ACEHT to HT; (b) SM Bound to HT; (c) Training Accuracy; (d) Initialization; (e) Step 5000; (f) Final Step (10,000)]
B.1.2 BATCH SIZE: 4096
[Figure: (a) ACEHT to HT; (b) SM Bound to HT; (c) Training Accuracy; (d) Initialization; (e) Step 5000; (f) Final Step (10,000)]
B.2 LENET ON CIFAR10
[Figure: (a) ACEHT to HT; (b) SM Bound to HT; (c) Training Accuracy; (d) Initialization; (e) Step 5000; (f) Final Step (10,000)]
B.3 FULLY CONNECTED NETWORK WITH BATCH NORMALIZATION ON CIFAR10
[Figure: (a) ACEHT to HT; (b) SM Bound to HT; (c) Training Accuracy; (d) Initialization; (e) Step 5000; (f) Final Step (10,000)]
C APPENDIX: DERIVATIVES OF THE CROSS-ENTROPY LOSS
C.1 GENERAL FORM
For the general form we consider the cross-entropy loss for a predictor function which is scaled by some scalar $\alpha$. Specifically, we assume an arbitrary input-output pair $(x, y) \in X \times Y$ and will compute the partial derivatives with respect to the parameters $\theta$ of the predictor function $\alpha\phi(\theta, x)$. Since the equations can become very long, we declutter the notation by letting $S = S(\alpha\phi(\theta, x))$, $\phi = \phi(\theta, x)$, and for two $d$-dimensional vectors $x, y \in \mathbb{R}^d$ we write $\langle x, y\rangle = \sum_{i=1}^d x_i y_i$. We also denote elementwise multiplication by $\odot$ and let $\Phi$ be a matrix such that $(\Phi)_{ij} = \phi_j$.

Lemma C.1. The first partial derivative of the cross-entropy loss with respect to an element $\theta_i$ is:
$$\partial_{\theta_i}\ell(\alpha\phi(\theta, x), y) = -\alpha\Big(\partial_{\theta_i}\phi_y - \sum_{l=1}^{C} \partial_{\theta_i}\phi_l\, S_l(\phi)\Big) = -\alpha\big(\partial_{\theta_i}\phi_y - \langle\partial_{\theta_i}\phi,\, S\rangle\big).$$
Proof.
$$\partial_{\theta_i}\ell(\alpha\phi, y) = \partial_{\theta_i}\big(-\log S_y\big) = -\frac{1}{S_y}\,\partial_{\theta_i} S_y. \qquad (3)$$
With some manipulation we compute $\partial_{\theta_i} S_y$:
$$\begin{aligned}
\partial_{\theta_i} S_y &= \frac{\partial_{\theta_i} e^{\alpha\phi_y}}{\sum_{k=1}^{C} e^{\alpha\phi_k}} - \frac{e^{\alpha\phi_y}}{\big(\sum_{k=1}^{C} e^{\alpha\phi_k}\big)^2}\sum_{l=1}^{C}\partial_{\theta_i} e^{\alpha\phi_l}\\
&= \alpha\,\partial_{\theta_i}\phi_y\,\frac{e^{\alpha\phi_y}}{\sum_{k=1}^{C} e^{\alpha\phi_k}} - \alpha\,\frac{e^{\alpha\phi_y}}{\sum_{k=1}^{C} e^{\alpha\phi_k}}\sum_{l=1}^{C}\frac{e^{\alpha\phi_l}}{\sum_{k=1}^{C} e^{\alpha\phi_k}}\,\partial_{\theta_i}\phi_l\\
&= S_y\,\alpha\Big(\partial_{\theta_i}\phi_y - \sum_{l=1}^{C}\partial_{\theta_i}\phi_l\, S_l\Big). \qquad (4)
\end{aligned}$$
Combining Equations 3 and 4 we obtain Lemma C.1:
$$\partial_{\theta_i}\ell(\alpha\phi, y) = -\alpha\Big(\partial_{\theta_i}\phi_y - \sum_{l=1}^{C}\partial_{\theta_i}\phi_l\, S_l\Big).$$
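Lemma C.1 is easy to sanity-check numerically; the snippet below (not from the paper) compares the closed form against autograd on a small affine model:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
C, d, alpha = 5, 7, 3.0
theta = torch.randn(C, d, requires_grad=True)
x, y = torch.randn(d), 2

loss = F.cross_entropy((alpha * (theta @ x)).unsqueeze(0), torch.tensor([y]))
(grad,) = torch.autograd.grad(loss, theta)

S = torch.softmax(alpha * (theta @ x), dim=0).detach()
# For phi = theta @ x we have d(phi_l)/d(theta_{i,j}) = x_j if l == i else 0,
# so Lemma C.1 reduces to -alpha * (1[i == y] - S_i) * x_j.
closed = -alpha * (torch.eye(C)[y] - S).unsqueeze(1) * x
assert torch.allclose(grad, closed, atol=1e-5)
```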
Lemma C.2. The second partial derivative of the cross-entropy loss with respect to elements $\theta_s$ and $\theta_t$ is:
$$\partial_{\theta_t}\partial_{\theta_s}\ell(\alpha\phi(\theta, x), y) = -\alpha\big(\partial_{\theta_t}\partial_{\theta_s}\phi_y - \langle\partial_{\theta_t}\partial_{\theta_s}\phi,\, S\rangle - \langle\partial_{\theta_s}\phi,\ \alpha S \odot (\partial_{\theta_t}\phi - (\partial_{\theta_t}\Phi)S)\rangle\big).$$
Proof. Differentiating the first-order derivative given by Lemma C.1, we obtain by the multi-variable chain rule:
$$\partial_{\theta_t}\partial_{\theta_s}\ell(\alpha\phi, y) = -\alpha\big(\partial_{\theta_t}\partial_{\theta_s}\phi_y - \langle\partial_{\theta_t}\partial_{\theta_s}\phi,\, S\rangle - \langle\partial_{\theta_s}\phi,\, \partial_{\theta_t}S\rangle\big). \qquad (5)$$
To compute $\partial_{\theta_t}S$ in Equation 5 we use Equation 4 and obtain:
$$(\partial_{\theta_t}S)_i = \partial_{\theta_t}S_i = S_i\,\alpha\big(\partial_{\theta_t}\phi_i - \langle\partial_{\theta_t}\phi,\, S\rangle\big),$$
which after some simplification reduces to:
$$\partial_{\theta_t}S(\phi) = \alpha S \odot \big(\partial_{\theta_t}\phi - (\partial_{\theta_t}\Phi)S\big). \qquad (6)$$
Combining Equations 5 and 6 we obtain Lemma C.2:
$$\partial_{\theta_t}\partial_{\theta_s}\ell(\alpha\phi, y) = -\alpha\big(\partial_{\theta_t}\partial_{\theta_s}\phi_y - \langle\partial_{\theta_t}\partial_{\theta_s}\phi,\, S\rangle - \langle\partial_{\theta_s}\phi,\ \alpha S \odot (\partial_{\theta_t}\phi - (\partial_{\theta_t}\Phi)S)\rangle\big).$$
C.2 AFFINE CROSS-ENTROPY HESSIAN TRACE
We now present the proof of Proposition 3.1:
Proof. Throughout the proof we make use of Lemma C.2 and let $\alpha = 1$. We also notice that any second partial derivative of $\phi((\theta, b), x)$ with respect to the parameters is zero since $\phi$ is an affine classifier.

We first consider the derivatives with respect to elements of $\theta$, where we use $\theta_{i,j}$ to denote the element in the $i$-th row and $j$-th column of the matrix $\theta$. Notice that $\partial_{\theta_{i,j}}\phi = x_j e_i$, which we write as $x^i_j$. The second-order derivatives are given by:
$$\partial_{\theta_{i,j}}\partial_{\theta_{s,t}}\ell(\phi, y) = \langle x^s_t,\ S \odot (x^i_j - x_j S_i \mathbf{1})\rangle = x_t\, S_s\big((x^i_j)_s - x_j S_i\big).$$
When computing the trace we only need the elements on the diagonal, and hence we get:
$$\partial_{\theta_{i,j}}\partial_{\theta_{i,j}}\ell(\phi, y) = x_j\big(S_i(x_j - x_j S_i)\big) = x_j^2\, S_i(1 - S_i).$$
Now we consider derivatives with respect to elements of $b$ and notice that $\partial_{b_i}\phi = e_i$. For the second derivative we then get:
$$\partial_{b_i}\partial_{b_j}\ell(\phi, y) = \langle e_j,\ S \odot (e_i - e_i S_i)\rangle = \delta_{ij}\, S_i(1 - S_i).$$
Finally, summing up the diagonal of the total Hessian we get:
$$\mathrm{Tr}\big(H(\ell(\Theta, x, y))\big) = (\|x\|^2 + 1)\Big(1 - \sum_j S_j^2\Big),$$
where we used the fact that $\sum_i S_i = 1$.
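The closed-form trace can likewise be verified against an autograd Hessian; the following is a small sketch on random data:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
C, d = 4, 6
x, y = torch.randn(d), torch.tensor([1])

def loss_fn(flat):
    # Unpack the flattened parameters of the affine classifier phi = theta x + b.
    theta, b = flat[: C * d].view(C, d), flat[C * d:]
    return F.cross_entropy((theta @ x + b).unsqueeze(0), y)

flat = torch.randn(C * d + C)
H = torch.autograd.functional.hessian(loss_fn, flat)
S = torch.softmax(flat[: C * d].view(C, d) @ x + flat[C * d:], dim=0)
closed = (x.dot(x) + 1) * (1 - (S ** 2).sum())  # (|x|^2 + 1)(1 - sum_j S_j^2)
assert torch.allclose(torch.trace(H), closed, atol=1e-4)
```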
D APPENDIX: SCALING PROOF
To prove Proposition 5.1 we first prove the following lemma:

Lemma D.1. Assume that the argmax of $\phi$ is the correct class $y$ and is unique. Then for $k \in \mathbb{N}$, $k \ge 1$ and $i \ne y$:
$$\lim_{\alpha\to\infty} \alpha^k S_i(\alpha\phi) = 0. \qquad (7)$$
Proof. Let $y$ be such that $\phi_y = \max_{k\in\{1,\dots,C\}}\phi_k$. For $i \ne y$ we have:
$$\lim_{\alpha\to\infty} \alpha^k\,\frac{e^{\alpha\phi_i}}{\sum_{k=1}^C e^{\alpha\phi_k}} = \lim_{\alpha\to\infty} \frac{\alpha^k}{e^{\alpha(\phi_y-\phi_i)} + \sum_{k=1,k\ne y}^C e^{\alpha(\phi_k-\phi_i)}} = \lim_{\alpha\to\infty} \frac{k!}{(\phi_y-\phi_i)^k\, e^{\alpha(\phi_y-\phi_i)} + \sum_{k=1,k\ne y}^C (\phi_k-\phi_i)^k\, e^{\alpha(\phi_k-\phi_i)}},$$
where the last step follows from applying L'Hôpital's rule $k$ times. Since we assumed that $y$ is the unique index such that $\phi_y = \max_{k\in\{1,\dots,C\}}\phi_k$, we have $\phi_k < \phi_y$ for all $k \ne y$. Hence, as $\alpha\to\infty$ we have $e^{\alpha(\phi_k-\phi_y)} \to 0$. Therefore:
$$\lim_{\alpha\to\infty} \frac{k!\, e^{\alpha(\phi_i-\phi_y)}}{(\phi_y-\phi_i)^k + \sum_{k=1,k\ne y}^C (\phi_k-\phi_i)^k\, e^{\alpha(\phi_k-\phi_y)}} = 0.$$
We are now ready to prove Proposition 5.1:
Proof. We first show that the term $-\alpha^2\langle\partial_{\theta_i}\phi,\ S\odot(\partial_{\theta_t}\phi - (\partial_{\theta_t}\Phi)S)\rangle$ always goes to zero. Expanding, we get:
$$\alpha^2\Big(\sum_{l=1}^C \partial_{\theta_i}\phi_l\, S_l\big(\partial_{\theta_t}\phi_l - \sum_{k=1}^C \partial_{\theta_t}\phi_k S_k\big)\Big) = \alpha^2\,\partial_{\theta_i}\phi_y\, S_y\big(\partial_{\theta_t}\phi_y - \sum_{k=1}^C \partial_{\theta_t}\phi_k S_k\big) + \alpha^2\sum_{l=1,l\ne y}^C \partial_{\theta_i}\phi_l\, S_l\big(\partial_{\theta_t}\phi_l - \sum_{k=1}^C \partial_{\theta_t}\phi_k S_k\big).$$
We now show that each term in the sum goes to zero. Consider $l \ne y$:
$$\Big|\alpha^2 \sum_{k=1}^C \partial_{\theta_i}\phi_l\, S_l\big(\partial_{\theta_t}\phi_l - \partial_{\theta_t}\phi_k S_k\big)\Big| \le \alpha^2\, S_l \sum_{k=1}^C \big|\partial_{\theta_i}\phi_l\big(\partial_{\theta_t}\phi_l - \partial_{\theta_t}\phi_k S_k\big)\big| = \alpha^2\, S_l\, M,$$
by the triangle inequality and $0 \le S_l \le 1$. Here $M = \sum_{k=1}^C |\partial_{\theta_i}\phi_l(\partial_{\theta_t}\phi_l - \partial_{\theta_t}\phi_k S_k)|$, and $M < \infty$ for all $0 < \alpha < \infty$ since $0 \le S_k \le 1$, the quantities $\partial_{\theta_i}\phi_l, \partial_{\theta_t}\phi_l, \partial_{\theta_t}\phi_k$ are constants, and it is a finite sum. By Lemma D.1, as $\alpha\to\infty$ we have $\alpha^2 S_l M \to 0$, and hence $\alpha^2\sum_{k=1}^C \partial_{\theta_i}\phi_l S_l(\partial_{\theta_t}\phi_l - \partial_{\theta_t}\phi_k S_k) \to 0$.
We now consider $l = y$:
$$\Big|\alpha^2\,\partial_{\theta_i}\phi_y\, S_y\big(\partial_{\theta_t}\phi_y - \sum_{k=1}^C \partial_{\theta_t}\phi_k S_k\big)\Big| \le \alpha^2\,|\partial_{\theta_i}\phi_y|\, S_y\Big(\big|\partial_{\theta_t}\phi_y - \partial_{\theta_t}\phi_y S_y\big| + \sum_{k=1,k\ne y}^C \big|\partial_{\theta_t}\phi_k S_k\big|\Big).$$
For the first term,
$$\alpha^2\big|\partial_{\theta_t}\phi_y - \partial_{\theta_t}\phi_y S_y\big| = \alpha^2\,|\partial_{\theta_t}\phi_y|\,(1 - S_y) = \alpha^2\,|\partial_{\theta_t}\phi_y|\sum_{s=1,s\ne y}^C S_s, \quad \text{since } S_s > 0 \text{ and } \textstyle\sum_s S_s = 1,$$
and using similar arguments and Lemma D.1 it follows that this term is zero in the limit. It is also clear that $\alpha^2\sum_{k=1,k\ne y}^C |\partial_{\theta_t}\phi_k S_k|$ goes to zero.
We are left with showing that $\alpha\big(\partial_{\theta_t}\partial_{\theta_i}\phi_y - \langle\partial_{\theta_t}\partial_{\theta_i}\phi,\, S\rangle\big)$ goes to zero (this is only guaranteed when $y$ is the true label). We use the same method as above:
$$\big|\alpha\big(\partial_{\theta_t}\partial_{\theta_i}\phi_y - \langle\partial_{\theta_t}\partial_{\theta_i}\phi,\, S\rangle\big)\big| \le \alpha\Big(\big|\partial_{\theta_t}\partial_{\theta_i}\phi_y - \partial_{\theta_t}\partial_{\theta_i}\phi_y S_y\big| + \sum_{l=1,l\ne y}^C \big|\partial_{\theta_t}\partial_{\theta_i}\phi_l S_l\big|\Big),$$
and the result follows by again applying Lemma D.1. | 1. What is the main contribution of the paper regarding generalization in neural networks?
2. What are the strengths and weaknesses of the proposed approach in relating flatness and output margins?
3. Do you have any concerns or suggestions regarding the empirical evidence presented in the paper?
4. How does the paper's claim about small-batch SGD compared to large-batch SGD training generalization correlate with other works in this area?
5. What are your thoughts on the paper's argument for why small-batch training should optimize the true risk better than large-batch training?
6. Are there any limitations or potential biases in the paper's analysis of the correlation between output margins and Hessian trace?
7. How does the paper's proposal for estimating the Hessian trace using input norm and softmax output compare to existing methods such as reparameterization-invariant flatness measures?
8. What is the significance of the paper's finding that neural networks can be scaled to be arbitrary flat, and how does it relate to the output margin?
9. What are some potential avenues for future research related to the paper's topics?
10. Could the paper provide more insight into why the scaling issue itself is well-known, and how their proposed measure avoids this issue? | Review | Review
The paper presents empirical evidence that the output margin - as a measure of the confidence of a multiclass predictor - is strongly correlated to the Hessian trace when using cross-entropy loss with softmax. Moreover, the paper presents a method for estimating the Hessian trace using the input norm and softmax output. This estimation is inspired by linear classifiers and shows a strong correlation with the Hessian trace.
I think the paper presents an interesting take on generalization by relating flatness and output margins. I would argue that it might be even more fruitful to also investigate input margins, or margins in some representation (i.e., some layer of the network). Nonetheless, the direction is interesting and the paper is well-written and pleasant to read. However, the analysis is a bit limited and the claimed contributions of this paper are not well substantiated.
The paper claims that it proves that flatness and output margins are strongly correlated. The paper proves this relation for linear models with cross-entropy loss and softmax, but only presents empirical indication for neural networks. While those experiments are interesting, they are very limited (only MNIST). Thus, they do not fully substantiate the claim.
Moreover, the paper claims to show that small-batch SGD is not better at escaping local minima than large-batch SGD. To that end, the paper performs experiments in which small-batch SGD indeed generalizes better than large-batch SGD. The paper explains this by the distribution of output margins. This alone does not support the claim. The paper then argues that Hoffer et al., 2017 have shown that large-batch training generalizes just as well (given a similar number of parameter updates) and from that concludes their claim is true. There are two issues with this argument: (i) it remains unclear whether the empirical results of Hoffer et al., 2017 show that large-batch training always generalizes as well as small-batch training given the same number of weight updates, since Hoffer already found that the generalization is also connected to their batch normalization technique, and other works (e.g., Goyal et al., 2017 [1]) report that large-batch training does not generalize as well as small-batch training. (ii) even if Hoffer et al., 2017 did support the claim that large-batch training generalizes as well as small-batch training, I don't see why the arguments made by the authors add to this. To give compelling empirical evidence the paper could, for example, show that the distribution of margins is different for small- and large-batch training over a wide selection of different datasets and network architectures. This would not prove the claim, but make it more plausible. In general, I am unsure whether this claim can be true for arbitrary learning problems. The argument made for small-batch training is that SGD in expectation optimizes the risk, while gradient descent optimizes the empirical error (see Chapter 14.5.1 in Shalev-Shwartz and Ben-David, 2014). Thus, using a small batch size (and thus performing something closer to SGD) should optimize the true risk and thus generalize better than using a large batch (thus being closer to GD), which should overfit. Of course, it is unclear how this observation on SGD translates to neural networks, which often are trained in the interpolation regime. However, this paper in its current form does not add insights to this discussion.
Lastly, the paper claims to show that neural networks can be scaled to be arbitrarily flat and that this issue relates to the output margin. The scaling issue itself is well-known (Dinh et al., 2017), and reparameterization-invariant flatness measures have been proposed (Petzka et al., 2019; 2020; Tsuzuku et al., 2019) that avoid this issue. The difference between those measures and the one proposed in this paper is not discussed. Thus, the contribution of this paper remains unclear.
Therefore, I think this paper is not ready for publication, yet.
Detailed comments:
it seems that the major advantage of using the margin measure instead of the Hessian trace is computational complexity. If this is indeed the selling point, please add an analysis of the computational complexity and a runtime analysis (e.g., when regularizing using the measure). In that case, please also compare to flatness measures of only one layer of the network (e.g., Petzka et al., 2020).
it seems that the margin measure does not suffer from the reparameterization curse which is a huge plus for this method. I suggest adding a short section that proves this.
in Sec. 4.2 the paper relates its results on output margins to input margins (mentioning SVMs). It is unclear to me how output margins relate to input margins. I do think that this relation is interesting, but the brief discussion in that section is not compelling.
[1] Goyal, Priya, et al. "Accurate, large minibatch sgd: Training imagenet in 1 hour." arXiv preprint arXiv:1706.02677 (2017).
------- After Discussion -------- The authors agreed that the paper is not ready for publication, yet. Thus I keep my original score. |
ICLR | Title
Efficient Evaluation of Adversarial Robustness for Deep Hashing based Retrieval
Abstract
Deep hashing has been extensively applied to massive image retrieval due to its efficiency and effectiveness. Recently, several adversarial attacks have been presented to reveal the vulnerability of deep hashing models against adversarial examples. However, existing attack methods suffer from degraded performance or inefficiency because they underutilize the semantic relations between original samples or spend a lot of time learning from these samples. In this paper, we propose a novel Pharos-guided Attack, dubbed PgA, to evaluate the adversarial robustness of deep hashing networks efficiently. Specifically, we design a pharos code to represent the semantics of the benign image, which preserves the similarity to semantically related samples and the dissimilarity to irrelevant examples. It is proven that we can quickly calculate the pharos code via a simple mathematical formula rather than time-consuming iterative procedures. Thus, PgA can directly conduct a reliable and efficient attack on deep hashing-based retrieval by maximizing the Hamming distance between the hash code of the adversarial example and the pharos code. Extensive experiments on the benchmark datasets verify that the proposed algorithm outperforms the prior state-of-the-arts in both attack strength and speed.
1 INTRODUCTION
It is challenging to rapidly and effectively search for the required information from vast collections in the current era of big data. Learning to hash (hashing) (Wang et al., 2018) has attracted much attention in large-scale image retrieval due to its exceptional benefits in efficient XOR operation and low storage cost by mapping high-dimensional data to compact binary codes. Particularly, deep hashing (Xia et al., 2014; Li et al., 2016; Cao et al., 2017) that learns nonlinear hash functions with deep neural networks (DNNs) has become a predominant image search technique since it delivers better retrieval accuracy than conventional hashing.
Recent works (Yang et al., 2020; Bai et al., 2020; Wang et al., 2021b;a; Zhang et al., 2021; Xiao & Wang, 2021; Lu et al., 2021) have revealed that deep hashing models are susceptible to adversarial examples. Although these imperceptible samples are crafted by adding small perturbations to original samples, they are sufficient to deceive models into making inaccurate predictions. There is no doubt that such malicious attacks pose grave security threats to image retrieval systems based on deep hashing. In a deep hashing-based face recognition system, for instance, adversarial examples can mislead the system into matching the faces of specific individuals in the database, infiltrating the system effectively. Consequently, there is significant demand for research into these security concerns in deep hashing-based retrieval.
A few studies (Yang et al., 2020; Bai et al., 2020; Wang et al., 2021b;a; Lu et al., 2021) have been conducted on adversarial attacks and adversarial defenses in deep hashing-based retrieval at present. Existing attack techniques have been shown to be effective, but they are neither efficient nor reliable in evaluating the adversarial robustness (i.e., defense performance) of deep hashing networks. Firstly, these methods (Yang et al., 2020; Bai et al., 2020; Lu et al., 2021) suffer from limited attack performance because they do not fully leverage the semantic relevance between available samples. For instance, SDHA (Lu et al., 2021) only reduces the similarity of adversarial samples with their semantically relevant images, ignoring the more numerous irrelevant ones. Secondly, though some hashing attack methods simultaneously consider similar and dissimilar pairs, they use time-consuming neural networks to learn discriminative semantic representations from these pairs for a precise semantic attack, e.g., ProS-GAN (Wang et al., 2021b) and THA (Wang et al., 2021a). In this paper, we focus on improving the deficiencies of previous hashing attacks in both effectiveness and efficiency, as shown in Fig. 1. Furthermore, strong adversarial attack methods with high efficiency can provide excellent benchmarks of model robustness and facilitate the development of adversarial defense strategies in deep hashing-based retrieval.
In this study, we propose Pharos-guided Attack (PgA) for efficient adversarial robustness evaluation of deep hashing networks. The core idea is to quickly compute the pharos code, which reflects the semantics of the original image, and then to use the pharos code to direct the generation of the potent adversarial sample. Specifically, we first design an optimal hash code (namely pharos code) as the discriminative representative of the benign image semantics, which maintains both the similarities to semantically relevant samples and the dissimilarities to irrelevant samples. Benefiting from the binary property of hash codes, we prove that the proposed Pharos Generation Method (PGM) can directly calculate the pharos code through a simple mathematical formula (refer to Appendix A.1 for proof). Thus, the pharos codes of the input data are calculated immediately before the adversarial attack. Subsequently, based on the pharos code, it is feasible to carry out an efficient adversarial hashing attack by maximizing the Hamming distance between the hash code of the adversarial example and the pharos code. Due to the excellence of the pharos codes, our attack manner can considerably enhance the efficiency and effectiveness of adversarial robustness verification, as shown in Fig. 1. In summary, our main contributions are as follows:
• We create the pharos code as the precise semantic representative of the original image content to aid in the construction of the adversarial attack framework for deep hashing-based retrieval. It should be emphasized that our proven mathematical formula in PGM can generate the pharos code instantly.
• A simple pharos-guided attack algorithm is provided, i.e., PgA, which is an efficient and reliable method to evaluate the adversarial robustness of deep hashing networks.
• Extensive experiments demonstrate that PgA can be applied to deep hashing frameworks and achieves state-of-the-art attack performance with high efficiency.
2 RELATED WORK
In this section, we briefly review the most relevant works in deep hashing-based image retrieval, adversarial attacks, and adversarial training for defense.
Deep Hashing based Image Retrieval. With the remarkable success of deep learning on many vision tasks, deep hashing methods have been well developed for large-scale image retrieval, yielding superior performance compared to traditional hashing methods based on hand-crafted features. The pioneering CNNH (Xia et al., 2014) adopts a two-stage strategy, i.e., hash code generation of training data and hash function construction with a DNN. Recently, deep hashing methods (Lai et al., 2015; Zhu et al., 2016; Li et al., 2016; Liu et al., 2016; Li et al., 2017; Cao et al., 2017; Jiang & Li, 2018; Cao et al., 2018; Su et al., 2018; Wang et al., 2021c; Doan et al., 2022) integrate feature learning and hash code encoding into an end-to-end DNN for better quality of hash codes. A notable work is DPSH (Li et al., 2016), which simultaneously learns the visual features of data points and preserves their semantic similarity with a pairwise-similarity loss. To alleviate data imbalance between positive and negative pairs, HashNet (Cao et al., 2017) adopts a weighted strategy in pairwise loss functions. Different from pairwise similarity learning, CSQ (Yuan et al., 2020) can generate high-quality hash codes by enforcing them close to pre-defined hash centers.
Adversarial Attack. In image classification, numerous adversarial attack methods (Szegedy et al., 2014; Goodfellow et al., 2015; Kurakin et al., 2017; Moosavi-Dezfooli et al., 2016; Madry et al., 2017; Carlini & Wagner, 2017; Dong et al., 2018; Papernot et al., 2017; Chen et al., 2017; Ilyas et al., 2018) have been developed to fool well-trained classifiers by constructing adversarial examples, ever since the intriguing properties (Szegedy et al., 2014; Biggio et al., 2013) of adversarial samples were discovered. For example, FGSM (Goodfellow et al., 2015) crafts adversarial samples by maximizing the loss along the gradient direction with a large step. As multi-step variants of FGSM, I-FGSM (Kurakin et al., 2017) and PGD (Madry et al., 2017) iteratively update perturbations with small steps for better attack performance.
Recently, researchers have extended adversarial attacks to deep hashing-based image retrieval (Yang et al., 2020; Bai et al., 2020; Wang et al., 2021b;a; Lu et al., 2021; Zhang et al., 2021). Existing adversarial attack methods for deep hashing can be organized into two categories: non-targeted attack and targeted attack. For a non-targeted attack in hashing-based retrieval, its goal is to generate adversarial examples that can confuse the hashing model to retrieve results irrelevant to the original image (Bai et al., 2020; Wang et al., 2021b). Achieving the non-targeted attack by minimizing the hash code similarity between the adversarial example and the original sample, Yang et al. (Yang et al., 2020) proposed HAG, the first adversarial attack method on deep hashing. SDHA (Lu et al., 2021) generates more effective adversarial queries due to staying away from the relevant images of the benign sample, while HAG only takes the original image into consideration. As for the targeted attack, it aims to construct adversarial examples whose retrieved images are semantically relevant to the given target label (Bai et al., 2020; Wang et al., 2021b). To achieve the targeted attack, P2P and DHTA (Bai et al., 2020) obtain the anchor code as the representative of the target label to direct the generation of the adversarial sample. Subsequently, Wang et al. (Wang et al., 2021a) defined the prototype code as the target code to reach a better targeted attack, which is called THA in this paper. ProS-GAN (Wang et al., 2021b) designs a generative framework for efficient targeted hashing attack under the test phase. Different from the above white-box scenarios, Xiao et al. (Xiao & Wang, 2021) proposed the targeted black-box attack NAG by enhancing the transferability of adversarial examples.
Unlike the prior work (Bai et al., 2020), where the anchor code is obtained from a few instances with the same semantics, we propose the pharos code, which preserves the semantic similarity with relevant samples and dissimilarity with irrelevant samples. Moreover, we use a proven mathematical formula (i.e., PGM) to instantly calculate the pharos code before the adversarial attack, instead of learning prototype codes (Wang et al., 2021b;a) through time-consuming neural networks as ProS-GAN and THA do. Hence, our pharos code is better suited for efficient adversarial robustness evaluation of deep hashing models.
Adversarial Training. Adversarial training (Goodfellow et al., 2015; Madry et al., 2017) aims to augment the training data with generated adversarial examples, which is the most robust training strategy against various adversarial attacks. Thus, modifications (Zhang et al., 2019; Wong et al., 2020; Pang et al., 2021) and applications (Li et al., 2021; Utrera et al., 2021) of adversarial training have emerged to improve the robustness and generalization of DNNs. For deep hashing-based retrieval, (Wang et al., 2021a) proposed the first effective adversarial training algorithm based on the targeted attack (dubbed ATRDH here) by narrowing the semantic gap between the adversarial samples and the original samples in the Hamming space.
3 METHOD
3.1 PRELIMINARIES
We consider that an attacked hashing model $F$ learns from a training set of $N$ data points $O = \{(x_i, y_i)\}_{i=1}^N$, where $x_i$ indicates the $i$-th image, and $y_i = [y_{i1}, y_{i2}, \dots, y_{iC}] \in \{0, 1\}^C$ denotes the label vector of $x_i$. $C$ indicates the total number of classes in the dataset. $y_{ij} = 1$ means that $x_i$ belongs to the $j$-th class. If $x_i$ and $x_j$ share at least one common label, they are semantically similar, i.e., $x_j$ is a positive sample of $x_i$. Otherwise, they are semantically dissimilar and $x_j$ is a negative sample of $x_i$.
Deep hashing aims at employing DNNs to transform high-dimensional data into compact binary codes while simultaneously preserving their semantic similarities. For a given hashing model $F$, the hash code $b_i$ of the instance $x_i$ is generated as:
$$b_i = F(x_i) = \mathrm{sign}(h_i) = \mathrm{sign}(f_\theta(x_i)), \quad \text{s.t. } b_i \in \{-1, 1\}^K, \qquad (1)$$
where $K$ represents the hash code length, and $f(\cdot)$ with parameters $\theta$ is a DNN that approximates the hash function $F(\cdot)$. The final binary code $b_i$ is obtained by applying $\mathrm{sign}(\cdot)$ to the output $h_i$ of $f_\theta(x_i)$. Typically, $f(\cdot)$ is implemented by a convolutional neural network (CNN) and adopts the tanh activation to simulate the sign function at the output layer.
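A sketch of such a hashing model is given below; the AlexNet backbone matches the experimental setup later in the paper, while the remaining details are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

class DeepHashModel(nn.Module):
    def __init__(self, n_bits=32):
        super().__init__()
        backbone = models.alexnet(weights=None)
        backbone.classifier[-1] = nn.Linear(4096, n_bits)
        self.net = backbone

    def forward(self, x):
        # Continuous code h_i in (-1, 1), approximating sign(.) via tanh.
        return torch.tanh(self.net(x))

    def hash(self, x):
        # Final binary code b_i in {-1, +1}^K of Eq. (1).
        return torch.sign(self.forward(x))
```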
3.2 THE PROPOSED PHAROS-GUIDED ATTACK
3.2.1 PROBLEM FORMULATION
In hashing-based retrieval, the goal of an adversarial attack (i.e., non-targeted attack) is to craft an adversarial example whose retrieval results are irrelevant to the original sample contents. For credibility, this objective can be achieved by maximizing the hash code distance between the adversarial example and its semantically relevant samples while simultaneously minimizing the distance from irrelevant samples, rather than using only the benign sample. Thus, for a given clean image $x$, the objective of its adversarial example $x'$ is formulated as follows:
$$\max_{x'} \sum_{i}^{N_p}\sum_{j}^{N_n}\Big[w_i\, D\big(F(x'), F(x^{(p)}_i)\big) - w_j\, D\big(F(x'), F(x^{(n)}_j)\big)\Big], \quad \text{s.t. } \|x - x'\|_p \le \epsilon, \qquad (2)$$
where $F(\cdot)$ is the hashing function approximated by the deep model $f(\cdot)$, and $D(\cdot, \cdot)$ is a distance metric. $w_i$ and $w_j$ represent distance weights. $x^{(p)}_i$ is a positive sample semantically related to the original sample $x$, and $x^{(n)}_j$ is a negative sample of $x$. Because the maximizing term of Eq. (2) pushes the hash code of the adversarial example close to those of unrelated samples and away from semantically relevant samples, optimal attack strength can be achieved in theory. $N_p$ and $N_n$ are the numbers of positive samples and negative samples, respectively. $\|\cdot\|_p$ ($p = 1, 2, \infty$) is the $L_p$ norm, which keeps the pixel difference between the adversarial sample and the original sample no more than $\epsilon$ for the imperceptibility of adversarial perturbations.
3.2.2 GENERATION OF PHAROS CODES.
Actually, the maximized objective in Eq. (2) is equivalent to finding a hash code b′, which satisfies:
$$\max_{b'} \sum_i\sum_j\big[w_i\, D_H(b', b^{(p)}_i) - w_j\, D_H(b', b^{(n)}_j)\big], \qquad (3)$$
where $D_H$ is the Hamming distance measure, $b^{(p)}_i$ is the hash code of the positive sample $x^{(p)}_i$, and $b^{(n)}_j$ is the binary code of the negative sample $x^{(n)}_j$. Subsequently, we can optimize the adversarial example by minimizing its distance from $b'$, i.e.,
$$\min_{x'} D_H(b', F(x')). \qquad (4)$$
For any hash codes $\hat{b}$ and $\check{b}$, we know that $D_H(\hat{b}, \check{b}) = \frac{1}{2}(K - \hat{b}^\top\check{b})$. Accordingly, we deduce that $D_H(\hat{b}, \check{b}) = K - D_H(-\hat{b}, \check{b})$. Letting the hash code $b^\star = -b'$, Eq. (3) and (4) can be reformulated as follows:
$$\min_{b^\star} \sum_i\sum_j\big\{w_i\big[D_H(b^\star, b^{(p)}_i) - K\big] - w_j\big[D_H(b^\star, b^{(n)}_j) - K\big]\big\}, \qquad \max_{x'} D_H(b^\star, F(x')) - K. \qquad (5)$$
Removing the constants, Eq. (5) can be written as follows:
$$\min_{b^\star} \sum_i\sum_j\big[w_i\, D_H(b^\star, b^{(p)}_i) - w_j\, D_H(b^\star, b^{(n)}_j)\big], \qquad \max_{x'} D_H(b^\star, F(x')). \qquad (6)$$
Due to the binary nature of hash codes, we can directly calculate the optimal code (named the pharos code $b^\star$) in problem (6) via the following Pharos Generation Method (PGM), i.e.,
$$b^\star = \mathrm{sign}\Big(\sum_{i}^{N_p}\sum_{j}^{N_n}\big(w_i\, b^{(p)}_i - w_j\, b^{(n)}_j\big)\Big), \qquad (7)$$
where $\mathrm{sign}(\cdot)$ is the sign function. The proof of PGM is given in Appendix A.1. In addition, we define $w_i$ and $w_j$ as follows:
$$w_i = s_i, \quad w_j = 1 - s_j, \qquad (8)$$
where $s_{i/j}$ ($s_{i/j} \in [0, 1]$) denotes the similarity between the adversarial example and the $i$/$j$-th benign sample. If the labels $y_i$ and $y_j$ of $x_i$ and $x_j$ are given, we can calculate $s_{i/j}$ by the Dice coefficient, i.e., $s_{i/j} = \frac{2|y \cap y_{i/j}|}{|y| + |y_{i/j}|}$. Otherwise, $s_{i/j}$ is usually determined by the optimization objective of the attacked hashing model. For instance, $s_i = 1$ and $s_j = 0$ are widely adopted in learning to hash.
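Because PGM is a single closed-form expression, it amounts to a few tensor operations; a minimal sketch of Eqs. (7)-(8) follows (tensor shapes are assumptions):

```python
import torch

def pharos_code(pos_codes, neg_codes, s_pos, s_neg):
    # pos_codes: (Np, K), neg_codes: (Nn, K) in {-1, +1};
    # s_pos: (Np,), s_neg: (Nn,) Dice similarities in [0, 1].
    w_i, w_j = s_pos, 1.0 - s_neg                        # Eq. (8)
    Np, Nn = pos_codes.shape[0], neg_codes.shape[0]
    # The double sum of Eq. (7) weights each single sum by the other count.
    acc = Nn * (w_i[:, None] * pos_codes).sum(0) \
        - Np * (w_j[:, None] * neg_codes).sum(0)
    return torch.sign(acc)                               # Eq. (7)
```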
3.2.3 GENERATING ADVERSARIAL EXAMPLES.
Having found the pharos code, the attack problem described in Eq. (2) can be translated into the following objective under the $L_\infty$ constraint:
$$\max_{x'} D_H(b^\star, F(x')), \quad \text{s.t. } \|x - x'\|_\infty \le \epsilon. \qquad (9)$$
Since $D_H(\hat{b}, \check{b}) = \frac{1}{2}(K - \hat{b}^\top\check{b})$, Eq. (9) is equivalent to:
$$\max_{x'} \mathcal{L}_a = -\frac{1}{K}\,(b^\star)^\top f_\theta(x'), \quad \text{s.t. } \|x - x'\|_\infty \le \epsilon. \qquad (10)$$
However, Eq. (10) focuses on the total similarity between $f_\theta(x')$ and $b^\star$, and cannot effectively ensure that each bit of $f_\theta(x')$ differs in sign from $b^\star$. Intuitively, bits with small differences between $f_\theta(x')$ and $b^\star$ should be given large weights. Hence, we add a weighting vector $\omega$ to $\mathcal{L}_a$ to push each bit of $f_\theta(x')$ away from the corresponding bit of $b^\star$. Formally,
$$\mathcal{L}_a = -\frac{1}{K}\,\omega^\top u, \quad u = b^\star \circ f_\theta(x'), \qquad (11)$$
where $\circ$ represents the Hadamard product, and $\omega$ has the same dimensions as $b^\star$. For efficiency, it is not necessary to enforce each element $u_k$ of $u$ to approach $-1$. Instead, we let $u_k$ converge to $t$ ($-1 < t < 0$). Thus, the component $\omega_k$ of $\omega$ is defined as
$$\omega_k = \begin{cases} u_k - 2t, & u_k > t \\ -t^2, & \text{otherwise} \end{cases} \qquad (12)$$
where $u_k$ is the $k$-th element of $u$. As shown in Figure 2, $t$ controls the margin between $b^\star_k$ and $f_\theta(x')_k$; $t$ is set to $-0.8$ by default. Furthermore, following (Yang et al., 2020), we make our pharos-guided attack focus only on the bits of $f_\theta(x')$ that have not yet been pushed to differ sufficiently in sign from $b^\star$, for efficiency, i.e.,
$$\mathcal{L}_a = -\frac{1}{\pi}\,[m \circ \omega]^\top u, \quad u = b^\star \circ f_\theta(x'), \qquad (13)$$
where $m \in \{0, 1\}^K$ is a mask and $\pi$ is the number of non-zero elements in $m$. The element $m_k$ of $m$ is defined as
$$m_k = \begin{cases} 1, & u_k > t \\ 0, & \text{otherwise} \end{cases} \qquad (14)$$
Notably, the pharos-guided attack with Eq. (10) is called PgA†, and that with Eq. (13) is the default PgA. Unlike HAG and SDHA, which use the SGD (Robbins & Monro, 1951) or Adam (Kingma & Ba, 2015) optimizer (Yang et al., 2020; Lu et al., 2021) with large numbers of iterations, this paper adopts PGD (Madry et al., 2017) to optimize $x'$ over $T$ ($T = 100$ by default) iterations for efficiency, i.e.,
$$x'_T = S_\epsilon\big(x'_{T-1} + \eta \cdot \mathrm{sign}(\nabla_{x'_{T-1}}\mathcal{L}_a)\big), \quad x'_0 = x + r, \qquad (15)$$
where $\eta$ is the step size, and $S_\epsilon$ projects $x'$ into the $\epsilon$-ball (Madry et al., 2017) around $x$. $r$ is random noise sampled from the uniform distribution $U(-\epsilon, \epsilon)$.
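Putting Eqs. (12)-(15) together, the full PgA loop for a single query image can be sketched as follows (the `model` returning tanh outputs in $(-1, 1)$ is assumed, and batching details are simplified):

```python
import torch

def pga_attack(model, x, b_star, eps=8/255, eta=1/255, T=100, t=-0.8):
    # Random start within the eps-ball, as in Eq. (15).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(T):
        x_adv = x_adv.detach().requires_grad_(True)
        u = b_star * model(x_adv)                        # u = b* o f_theta(x')
        m = (u > t).float()                              # mask, Eq. (14)
        omega = torch.where(u > t, u - 2 * t,
                            torch.full_like(u, -t * t))  # weights, Eq. (12)
        # L_a of Eq. (13); pi is the number of non-zero mask entries.
        loss = -(m * omega * u).sum() / m.sum().clamp(min=1)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + eta * grad.sign()            # ascend on L_a
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```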
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Datasets. We adopt three popular datasets used in hashing-based retrieval to evaluate our method in extensive experiments: FLICKR-25K (Huiskes & Lew, 2008), NUS-WIDE (Chua et al., 2009) and MS-COCO (Lin et al., 2014). The FLICKR-25K dataset comprises 25,000 Flickr images with 38 labels. We sample 1,000 images as the query set and regard the rest as the database, following (Wang et al., 2021a). Moreover, we randomly select 5,000 instances from the database to train hashing models. The NUS-WIDE dataset contains 269,648 images annotated with 81 concepts. We sample a subset with the 21 most popular concepts, which consists of 195,834 images. 2,100 images are sampled from the subset as queries, while the remaining images are regarded as the database. We randomly select 10,500 images from the database for the training set (Wang et al., 2021b). The MS-COCO dataset consists of 82,783 training samples and 40,504 validation samples, where each instance is annotated with at least one of 80 categories. After combining the training and validation sets, we randomly pick 5,000 instances as queries and take the rest as the database. For the training set, 10,000 images are randomly selected from the database. In addition, we conduct an extra experiment on CIFAR-10 (refer to A.2).
Protocols. To evaluate the performance of PgA, we adopt the standard metrics, i.e., Mean Average Precision (MAP) (Yang et al., 2020) and the Precision-Recall (PR) curve. Following (Bai et al., 2020), we calculate MAP values on the top 5,000 results from the database.
Baselines. Following (Yang et al., 2020; Bai et al., 2020), we adopt DPH as the default attacked hashing method, which is a generic algorithm in deep hashing-based retrieval. AlexNet (Krizhevsky et al., 2012) is selected as the default backbone network to implement hashing models on FLICKR25K, NUS-WIDE, and MS-COCO. We also evaluate the attack performance of our method against the defense model trained by ATRDH (Wang et al., 2021a) (the only adversarial training algorithm in deep hashing). We compare the proposed algorithm with multiple hashing attack methods, including P2P, DHTA, ProS-GAN, THA, HAG, and SDHA. For targeted attacks, we randomly select a label as the target label which does not share the same category as the true label. Other details of these methods are consistent with the original literature.
Implementation Details. We use stochastic gradient descent (SGD) with an initial learning rate of 0.01 and momentum of 0.9 as the optimizer for target hashing models. We fix the mini-batch size to 32 and the weight decay parameter to 5 × 10−4. All images are resized to 224 × 224 and normalized to [0, 1] before being fed into hashing models. For the proposed attack method PgA, we adopt PGD (Bai et al., 2020) to optimize adversarial examples. The step size η and the number of iterations T are set to 1/255 and 100, respectively. The perturbation budget ϵ is set to 8/255. All codes are based on PyTorch 1.12 and are executed on NVIDIA RTX 3090 GPUs.
4.2 ATTACK RESULTS
Table 1 and Table 2 present the attack performance (MAP) of different attack methods on original deep hashing networks without defense and on adversarially trained models, respectively. The lower the MAP value, the stronger the attack. "Clean" in these tables denotes querying with benign images, so its MAP values reflect the original retrieval performance of the hashing model without attack. From Table 1, we observe that the proposed method can greatly reduce the MAP values on three datasets with hash bits varying from 16 to 64, and it outperforms all other attacks. Compared to DHTA (Bai et al., 2020), the strongest targeted attack in Table 1, our PgA achieves average boosts of 18.68%, 14.66%, and 9.03% over the tested bit lengths on FLICKR-25K, NUS-WIDE, and MS-COCO, respectively. Moreover, compared with the state-of-the-art non-targeted attack SDHA, our method outperforms it by an average of 4.40%, 4.09%, and 2.59% on the three datasets. As for the defense model trained by ATRDH (the only adversarial training algorithm in deep hashing), Table 2 shows that all the MAP values of PgA are lower than those of the other attack methods. Even against SDHA, the proposed PgA brings an average improvement of 7.70%, 5.39%, and 6.02% for FLICKR-25K, NUS-WIDE, and MS-COCO, respectively. The superior performance of our method owes to the strength of the pharos code, which considers the positive and negative samples simultaneously. In contrast, HAG and SDHA merely use the information from benign and positive samples, respectively. Thus, the pharos code-based PgA is better than the previous state-of-the-arts.
For a more comprehensive comparison, the PR curves of different methods on the three datasets with a 32-bit code length are shown in Figs. 3 and 4. We can see that the curves of our method always lie below all the others, demonstrating that our algorithm attacks hashing models more effectively.
In addition, we compare the attack performance of PgA with that of PgA† to evaluate its effectiveness. The results in Tables 1 and 2 show that PgA with Eq. (13) is better than PgA† with Eq. (10) in most cases. Hence, the accelerated operation in Eq. (13) effectively improves attack effectiveness.
4.3 EFFICIENCY ANALYSIS
To confirm the high efficiency of the proposed method, we record the MAP and time of various attack methods, where the time denotes the average interval (seconds per image) to conduct attacks on the test set. Note that this time usually includes the training time of the attack model, e.g., for ProS-GAN and THA. The results are summarized in Tables 3 and 4. We observe that our PgA achieves the strongest attack performance and the shortest attack time for all datasets. Tables 3 and 4 are similar in terms of time results, so here we mainly focus on Table 3 for our analysis. Specifically, ProS-GAN has the lowest attack efficiency because it requires a few hours to train a generative network for the attack. ProS-GAN takes about 127, 125, and 52 times longer than PgA on FLICKR-25K, NUS-WIDE, and MS-COCO, respectively. Moreover, P2P, DHTA, and HAG have similar attack times and are much faster than ProS-GAN. Nevertheless, since they require 2,000 gradient iterations, they are still more than 13 times slower than our PgA. In summary, PgA not only outperforms all previous methods in attack performance but also produces adversarial examples at the fastest speed.
To further verify the high efficiency of our PgA in the same setting, we use PGD-20 to optimize adversarial perturbations for all attack methods, as shown in Table 5. PgA has the same speed as P2P, DHTA, and HAG, because they can directly calculate the target code to guide the generation of adversarial samples extremely fast. However, THA costs a lot of time to learn to construct the target code with a fully-connected network, so it is much slower than PgA. Furthermore, SDHA is less efficient than PgA because of its complex objective function (Lu et al., 2021).
4.4 ANALYSIS ON HYPER-PARAMETERS
Effect of T & Efficiency. Figures 5(a) and 5(b) present attack performance (MAP) for different attack iterations (i.e., T) of PGD. Overall, the MAP values decrease with increasing T. The attack performance tends to level off beyond T = 20 for DPH, and beyond T = 50 for ATRDH. For the same number of iterations, the attack performance of PgA maintains a large gap over the other methods. Therefore, PgA is an efficient tool for evaluating the robustness of deep hashing networks.
Effect of t. Figure 5(c) illustrates the effect of the hyper-parameter t on the attack performance. For the DPH model without defense, there is no appreciable change in MAP across different t. For ATRDH, the attack performance shows a small decrease as t increases. Although the attack performance is not extremely sensitive to t, picking an appropriate value of t should not be ignored.
4.5 UNIVERSALITY ON DIFFERENT HASHING METHODS
We argue that the proposed attack algorithm is generic to most popular hashing models. To verify this point, we conduct adversarial attacks on multiple hashing methods with a 32-bit hash code length, including DPSH (Li et al., 2016), HashNet (Cao et al., 2017), DSDH (Li et al., 2017), DCH (Cao et al., 2018) and CSQ (Yuan et al., 2020), implemented with VGG11. The results are reported in Table 6. It can be seen from the table that our PgA is effective in fooling the illustrated hashing models, with better attack performance than the others. Firstly, when testing hashing methods without defense, our PgA exceeds the previous state-of-the-art SDHA in all cases. Especially with DCH, there is a 7.31% gap between PgA and SDHA. Moreover, under the defense of ATRDH, PgA reduces the MAP of all hashing methods to significant minimums. Also, our PgA brings a 24.63% enhancement on the DCH model compared to SDHA. Thus, the above phenomena demonstrate the universality of the proposed attack method, which can be utilized with most popular hashing algorithms. We conduct a further evaluation on the defense model trained with PgA; please refer to A.3.
5 CONCLUSION
In this paper, we proposed an adversarial attack method (i.e., PgA) for efficiently evaluating the adversarial robustness of deep hashing-based retrieval. Specifically, we provided PGM to quickly obtain the pharos code as the optimal representative of the image semantics for the attack in deep hashing. Moreover, PgA takes the pharos code as a "label" to guide the non-targeted attack, where the similarity between the pharos code and the hash code of the adversarial example is minimized. Besides, we added adaptive weighting to the Hamming distance calculation, which further boosts the strength of PgA. Experiments showed that our algorithm achieves state-of-the-art attack performance and efficiency compared to previous attack methods in deep hashing-based retrieval.
A APPENDIX
A.1 PROOF OF PGM
Theorem. The pharos code $b^\star$ which satisfies Eq. (6) can be calculated by the Pharos Generation Method (PGM), i.e.,
$$b^\star = \arg\min_{b\in\{-1,+1\}^K} \sum_i\sum_j\big[w_i\, D_H(b, b^{(p)}_i) - w_j\, D_H(b, b^{(n)}_j)\big] = \mathrm{sign}\Big(\sum_{i}^{N_p}\sum_{j}^{N_n}\big(w_i\, b^{(p)}_i - w_j\, b^{(n)}_j\big)\Big).$$

Proof. We define the following function:
$$\psi(b) = \sum_i\sum_j\big[w_i\, D_H(b, b^{(p)}_i) - w_j\, D_H(b, b^{(n)}_j)\big].$$
As the pharos code $b^\star$ needs to be the optimal solution of the minimization objective, the above theorem is equivalent to proving the following inequality:
$$\psi(b) \ge \psi(b^\star), \quad \forall\, b \in \{-1,+1\}^K.$$
Let $b = (b_1, b_2, \dots, b_K)$; then we have
$$\begin{aligned}
\psi(b) &= \sum_i\sum_j\Big[w_i\,\tfrac{1}{2}\big(K - b^\top b^{(p)}_i\big) - w_j\,\tfrac{1}{2}\big(K - b^\top b^{(n)}_j\big)\Big]\\
&= -\tfrac{1}{2}\sum_i\sum_j\big[w_i\, b^\top b^{(p)}_i - w_j\, b^\top b^{(n)}_j\big] + \xi\\
&= -\tfrac{1}{2}\sum_i\sum_j\Big[w_i\sum_{k=1}^K b_k b^{(p)}_{ik} - w_j\sum_{k=1}^K b_k b^{(n)}_{jk}\Big] + \xi\\
&= -\tfrac{1}{2}\sum_i\sum_j\sum_{k=1}^K b_k\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big) + \xi\\
&= -\tfrac{1}{2}\sum_{k=1}^K b_k\sum_i\sum_j\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big) + \xi,
\end{aligned}$$
where ξ is a constant. Similarly,
$$\psi(b^\star) = -\frac{1}{2}\sum_{k=1}^K b^\star_k\sum_i\sum_j\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big) + \xi.$$
Due to the nature of absolute value, we have
$$\begin{aligned}
\psi(b) &= -\tfrac{1}{2}\sum_{k=1}^K b_k\sum_i\sum_j\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big) + \xi\\
&\ge -\tfrac{1}{2}\sum_{k=1}^K \Big|b_k\sum_i\sum_j\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big)\Big| + \xi\\
&= -\tfrac{1}{2}\sum_{k=1}^K |b_k|\,\Big|\sum_i\sum_j\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big)\Big| + \xi\\
&= -\tfrac{1}{2}\sum_{k=1}^K \Big|\sum_i\sum_j\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big)\Big| + \xi\\
&= -\tfrac{1}{2}\sum_{k=1}^K \mathrm{sign}\Big(\sum_i\sum_j\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big)\Big)\sum_i\sum_j\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big) + \xi\\
&= -\tfrac{1}{2}\sum_{k=1}^K b^\star_k\sum_i\sum_j\big(w_i\, b^{(p)}_{ik} - w_j\, b^{(n)}_{jk}\big) + \xi = \psi(b^\star).
\end{aligned}$$
That is, ψ(b) ≥ ψ(b⋆). Hence, the Theorem is proved.
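For small $K$ the Theorem can also be checked by brute force; the snippet below (a sanity check, not from the paper) enumerates all $2^K$ codes and confirms that the closed form attains the minimum of $\psi$:

```python
import itertools
import torch

torch.manual_seed(0)
K, Np, Nn = 6, 4, 5
bp = torch.sign(torch.randn(Np, K))      # positive hash codes
bn = torch.sign(torch.randn(Nn, K))      # negative hash codes
wi, wj = torch.rand(Np), torch.rand(Nn)

def psi(b):
    dp = 0.5 * (K - bp @ b)              # Hamming distances to positives
    dn = 0.5 * (K - bn @ b)              # Hamming distances to negatives
    # The double sum weights each single sum by the other set's count.
    return Nn * (wi * dp).sum() - Np * (wj * dn).sum()

closed = torch.sign(Nn * (wi[:, None] * bp).sum(0)
                    - Np * (wj[:, None] * bn).sum(0))
best = min((torch.tensor(c, dtype=torch.float) * 2 - 1
            for c in itertools.product([0, 1], repeat=K)), key=psi)
assert torch.isclose(psi(closed), psi(best))
```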
A.2 ATTACK RESULTS ON CIFAR-10
Table 7 shows the results of the hashing attack methods on the single-label dataset CIFAR-10 (Cao et al., 2017). We can observe that our PgA is slightly better than the state-of-the-art SDHA for DPH. Moreover, the proposed PgA outperforms HAG and SDHA by over 2.23%. Especially in the 64-bit case, PgA brings a boost of 4.05% and 10.19% compared to HAG and SDHA, respectively.
A.3 ADVERSARIAL TRAINING
We use the generated adversarial samples for adversarial training to verify whether the proposed method is still valid. The objective of the adversarial training is formulated as follows:
$$\min_\theta \mathcal{L}_{adv} = \mathcal{L}_{ori} - \sum_{i=1}^N \frac{1}{K}\,(b^\star_i)^\top f_\theta(x'_i), \qquad (16)$$
where $b^\star_i$ is the pharos code of the instance $x_i$, and $x'_i$ is the adversarial example of $x_i$. The latter term in Eq. (16) rebuilds the similarity between the adversarial sample and its true semantics. $\mathcal{L}_{ori}$ is the original loss function of the deep hashing model, which ensures the basic performance of hashing learning. The experimental results are illustrated in Table 8. The adversarial training does improve the defense capability of the deep hashing model, but our attack method remains valid and significantly outperforms the other methods. | 1. What is the focus and contribution of the paper on adversarial attacks?
2. What are the strengths of the proposed method, particularly in terms of efficiency and effectiveness?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. Do you have any concerns or questions regarding the proof and theoretical analysis presented in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes an adversarial attack method called PgA that can efficiently generate adversarial examples which are highly effective for evaluating the adversarial robustness of deep hashing-based retrieval.
Strengths And Weaknesses
Strength:
The proposed method not only finds a better adversarial attack (empirically), but the time complexity of generation is also comparatively lower.
Weakness:
The problem motivation is only to generate adversarial examples efficiently. There are already other methods that perform semantically motivated adversarial example generation, so this cannot be an entirely novel contribution. The paper does not motivate well enough why faster adversarial example generation is so important.
I am doubtful about some parts of the proof. "the maximized objective in Eq. (2) is equivalent to finding a hash code b′" -- Could you prove this step? (see question).
Clarity, Quality, Novelty And Reproducibility
Why did you choose the name pharos?
Equation 2 is not equivalent to optimizing Eqn 3 followed by optimization of Eqn 4. I think it is a constrained optimization of Equation 3 with the constraint DH(b′, F(x′)) = 0. However, I am not saying that the approach taken by the paper is completely off, but some statement (and proof) could be made about this approximation.
ICLR | Title
Efficient Evaluation of Adversarial Robustness for Deep Hashing based Retrieval
Abstract
Deep hashing has been extensively applied to massive image retrieval due to its efficiency and effectiveness. Recently, several adversarial attacks have been presented to reveal the vulnerability of deep hashing models against adversarial examples. However, existing attack methods suffer from degraded performance or inefficiency because they underutilize the semantic relations between original samples or spend a lot of time learning from these samples. In this paper, we propose a novel Pharos-guided Attack, dubbed PgA, to evaluate the adversarial robustness of deep hashing networks efficiently. Specifically, we design a pharos code to represent the semantics of the benign image, which preserves the similarity to semantically related samples and the dissimilarity to irrelevant examples. It is proven that we can quickly calculate the pharos code via a simple mathematical formula rather than time-consuming iterative procedures. Thus, PgA can directly conduct a reliable and efficient attack on deep hashing-based retrieval by maximizing the Hamming distance between the hash code of the adversarial example and the pharos code. Extensive experiments on the benchmark datasets verify that the proposed algorithm outperforms the prior state-of-the-arts in both attack strength and speed.
1 INTRODUCTION
It is challenging to rapidly and effectively search for the required information from vast collections in the current era of big data. Learning to hash (hashing) (Wang et al., 2018) has attracted much attention in large-scale image retrieval due to its exceptional benefits in efficient XOR operation and low storage cost by mapping high-dimensional data to compact binary codes. Particularly, deep hashing (Xia et al., 2014; Li et al., 2016; Cao et al., 2017) that learns nonlinear hash functions with deep neural networks (DNNs) has become a predominant image search technique since it delivers better retrieval accuracy than conventional hashing.
Recent works (Yang et al., 2020; Bai et al., 2020; Wang et al., 2021b;a; Zhang et al., 2021; Xiao & Wang, 2021; Lu et al., 2021) have revealed that deep hashing models are susceptible to adversarial examples. Although these imperceptible samples are crafted by adding small perturbations to original samples, they are sufficient to deceive models into making inaccurate predictions. There is no doubt that such malicious attacks pose grave security threats to image retrieval systems based on deep hashing. In a deep hashing-based face recognition system, for instance, adversarial examples can mislead the system into matching the faces of specific individuals in the database, infiltrating the system effectively. Consequently, there is significant demand for research into these security concerns in deep hashing-based retrieval.
A few studies (Yang et al., 2020; Bai et al., 2020; Wang et al., 2021b;a; Lu et al., 2021) have been conducted on adversarial attacks and adversarial defenses in deep hashing-based retrieval at present. Existing attack techniques have been shown to be effective, but they are neither efficient nor reliable in evaluating the adversarial robustness (i.e., defense performance) of deep hashing networks. Firstly, these methods (Yang et al., 2020; Bai et al., 2020; Lu et al., 2021) suffer from limited attack performance because they do not fully leverage the semantic relevance between available samples. For instance, SDHA (Lu et al., 2021) only reduces the similarity of adversarial samples with their semantically relevant images, ignoring the more numerous irrelevant ones. Secondly, though some hashing attack methods simultaneously consider similar and dissimilar pairs, they use time-consuming neural networks to learn discriminative semantic representations from these pairs for a precise semantic attack, e.g., ProS-GAN (Wang et al., 2021b) and THA (Wang et al., 2021a). In this paper, we focus on improving the deficiencies of previous hashing attacks in both effectiveness and efficiency, as shown in Fig. 1. Furthermore, strong adversarial attack methods with high efficiency can provide excellent benchmarks of model robustness and facilitate the development of adversarial defense strategies in deep hashing-based retrieval.
In this study, we propose Pharos-guided Attack (PgA) for efficient adversarial robustness evaluation of deep hashing networks. The core idea is to quickly compute the pharos code, which reflects the semantics of the original image, and then to use the pharos code to direct the generation of the potent adversarial sample. Specifically, we first design an optimal hash code (namely pharos code) as the discriminative representative of the benign image semantics, which maintains both the similarities to semantically relevant samples and the dissimilarities to irrelevant samples. Benefiting from the binary property of hash codes, we prove that the proposed Pharos Generation Method (PGM) can directly calculate the pharos code through a simple mathematical formula (refer to Appendix A.1 for proof). Thus, the pharos codes of the input data are calculated immediately before the adversarial attack. Subsequently, based on the pharos code, it is feasible to carry out an efficient adversarial hashing attack by maximizing the Hamming distance between the hash code of the adversarial example and the pharos code. Due to the excellence of the pharos codes, our attack manner can considerably enhance the efficiency and effectiveness of adversarial robustness verification, as shown in Fig. 1. In summary, our main contributions are as follows:
• We create the pharos code as the precise semantic representative of the original image content to aid in the construction of the adversarial attack framework for deep hashing-based retrieval. It should be emphasized that our proven mathematical formula in PGM can generate the pharos code instantly.
• A simple pharos-guided attack algorithm is provided, i.e., PgA, which is an efficient and reliable method to evaluate the adversarial robustness of deep hashing networks.
• Extensive experiments demonstrate that PgA can be applied to deep hashing frameworks and achieves state-of-the-art attack performance with high efficiency.
2 RELATED WORK
In this section, we briefly review the most relevant works in deep hashing-based image retrieval, adversarial attacks, and adversarial training for defense.
Deep Hashing based Image Retrieval. With the remarkable success of deep learning on many vision tasks, deep hashing methods have been well developed for large-scale image retrieval, yielding superior performance compared to traditional hashing methods based on hand-crafted features. The pioneering CNNH (Xia et al., 2014) adopts a two-stage strategy, i.e., hash code generation of training data and hash function construction with a DNN. Recently, deep hashing methods (Lai et al., 2015; Zhu et al., 2016; Li et al., 2016; Liu et al., 2016; Li et al., 2017; Cao et al., 2017; Jiang & Li, 2018; Cao et al., 2018; Su et al., 2018; Wang et al., 2021c; Doan et al., 2022) integrate feature learning and hash code encoding into an end-to-end DNN for better quality of hash codes. A notable work is DPSH (Li et al., 2016), which simultaneously learns the visual features of data points and preserves their semantic similarity with a pairwise-similarity loss. To alleviate data imbalance between positive and negative pairs, HashNet (Cao et al., 2017) adopts a weighted strategy in pairwise loss functions. Different from pairwise similarity learning, CSQ (Yuan et al., 2020) can generate high-quality hash codes by enforcing them close to pre-defined hash centers.
Adversarial Attack. In image classification, numerous adversarial attack methods (Szegedy et al., 2014; Goodfellow et al., 2015; Kurakin et al., 2017; Moosavi-Dezfooli et al., 2016; Madry et al., 2017; Carlini & Wagner, 2017; Dong et al., 2018; Papernot et al., 2017; Chen et al., 2017; Ilyas et al., 2018) have been developed to fool well-trained classifiers by constructing adversarial examples, ever since the intriguing properties (Szegedy et al., 2014; Biggio et al., 2013) of adversarial samples were discovered. For example, FGSM (Goodfellow et al., 2015) crafts adversarial samples by maximizing the loss along the gradient direction with a large step. As multi-step variants of FGSM, I-FGSM (Kurakin et al., 2017) and PGD (Madry et al., 2017) iteratively update perturbations with small steps for better attack performance.
Recently, researchers have extended adversarial attacks to deep hashing-based image retrieval (Yang et al., 2020; Bai et al., 2020; Wang et al., 2021b;a; Lu et al., 2021; Zhang et al., 2021). Existing adversarial attack methods for deep hashing can be organized into two categories: non-targeted attack and targeted attack. For a non-targeted attack in hashing-based retrieval, its goal is to generate adversarial examples that can confuse the hashing model to retrieve results irrelevant to the original image (Bai et al., 2020; Wang et al., 2021b). Achieving the non-targeted attack by minimizing the hash code similarity between the adversarial example and the original sample, Yang et al. (Yang et al., 2020) proposed HAG, the first adversarial attack method on deep hashing. SDHA (Lu et al., 2021) generates more effective adversarial queries due to staying away from the relevant images of the benign sample, while HAG only takes the original image into consideration. As for the targeted attack, it aims to construct adversarial examples whose retrieved images are semantically relevant to the given target label (Bai et al., 2020; Wang et al., 2021b). To achieve the targeted attack, P2P and DHTA (Bai et al., 2020) obtain the anchor code as the representative of the target label to direct the generation of the adversarial sample. Subsequently, Wang et al. (Wang et al., 2021a) defined the prototype code as the target code to reach a better targeted attack, which is called THA in this paper. ProS-GAN (Wang et al., 2021b) designs a generative framework for efficient targeted hashing attack under the test phase. Different from the above white-box scenarios, Xiao et al. (Xiao & Wang, 2021) proposed the targeted black-box attack NAG by enhancing the transferability of adversarial examples.
Unlike the prior work (Bai et al., 2020), where the anchor code is obtained from a few instances with the same semantics, we propose the pharos code, which preserves the semantic similarity with relevant samples and the dissimilarity with irrelevant samples. Moreover, we use a proven mathematical formula (i.e., PGM) to instantly calculate the pharos code before the adversarial attack, instead of learning prototype codes (Wang et al., 2021b;a) through time-consuming neural networks as ProS-GAN and THA do. Hence, our pharos code is better suited for efficient adversarial robustness evaluation of deep hashing models.
Adversarial Training. Adversarial training (Goodfellow et al., 2015; Madry et al., 2017) augments the training data with generated adversarial examples, and it remains among the most robust training strategies against various adversarial attacks. Accordingly, modifications (Zhang et al., 2019; Wong et al., 2020; Pang et al., 2021) and applications (Li et al., 2021; Utrera et al., 2021) of adversarial training have emerged to improve the robustness and generalization of DNNs. For deep hashing-based retrieval, (Wang et al., 2021a) proposed the first effective adversarial training algorithm based on the targeted attack (dubbed ATRDH here), which narrows the semantic gap between adversarial samples and original samples in the Hamming space.
3 METHOD
3.1 PRELIMINARIES
We consider that an attacked hashing model $F$ learns from a training set of $N$ data points $O = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ indicates the $i$-th image and $y_i = [y_{i1}, y_{i2}, \dots, y_{iC}] \in \{0, 1\}^C$ denotes the label vector of $x_i$. $C$ indicates the total number of classes in the dataset, and $y_{ij} = 1$ means that $x_i$ belongs to the $j$-th class. If $x_i$ and $x_j$ share at least one common label, they are semantically similar, i.e., $x_j$ is a positive sample of $x_i$. Otherwise, they are semantically dissimilar and $x_j$ is a negative sample of $x_i$.
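To make this similarity convention concrete, the following minimal sketch (ours, not part of the original paper; all names are illustrative) implements the positive/negative relation induced by multi-hot label vectors:

```python
import numpy as np

def semantically_similar(y_i: np.ndarray, y_j: np.ndarray) -> bool:
    """Positive pair iff the multi-hot label vectors share at least one class."""
    return bool(np.any(np.logical_and(y_i, y_j)))

# Example with C = 4 classes (labels are 0/1 integer vectors):
y_a = np.array([1, 0, 1, 0])   # x_a belongs to classes 0 and 2
y_b = np.array([0, 0, 1, 1])   # x_b belongs to classes 2 and 3
y_c = np.array([0, 1, 0, 0])   # x_c belongs to class 1 only
assert semantically_similar(y_a, y_b)       # share class 2 -> positive pair
assert not semantically_similar(y_a, y_c)   # no shared class -> negative pair
```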
Deep hashing aims at employing DNNs to transform high-dimensional data into compact binary codes while preserving their semantic similarities. For a given hashing model $F$, the hash code $b_i$ of an instance $x_i$ is generated as:
$$b_i = F(x_i) = \mathrm{sign}(h_i) = \mathrm{sign}(f_\theta(x_i)), \quad \text{s.t. } b_i \in \{-1, 1\}^K, \tag{1}$$
where $K$ represents the hash code length, and $f_\theta(\cdot)$ with parameters $\theta$ is a DNN that approximates the hash function $F(\cdot)$. The final binary code $b_i$ is obtained by applying $\mathrm{sign}(\cdot)$ to the output $h_i$ of $f_\theta(x_i)$. Typically, $f_\theta(\cdot)$ is implemented as a convolutional neural network (CNN) and adopts the tanh activation at the output layer to simulate the sign function.
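As one possible instantiation of Eq. (1), the sketch below wraps a generic backbone with a linear hash layer and tanh output in PyTorch; the class and function names are our own, and the backbone is assumed to be any feature extractor (e.g., AlexNet) returning `(N, feat_dim)` features:

```python
import torch
import torch.nn as nn

class DeepHashModel(nn.Module):
    """A minimal f_theta: backbone features -> K-dim tanh outputs (Eq. 1)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, K: int):
        super().__init__()
        self.backbone = backbone               # e.g., an AlexNet/VGG feature extractor
        self.hash_layer = nn.Linear(feat_dim, K)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.hash_layer(self.backbone(x))
        return torch.tanh(h)                   # smooth surrogate of sign(.)

def hash_code(model: DeepHashModel, x: torch.Tensor) -> torch.Tensor:
    """b = sign(f_theta(x)) in {-1, +1}^K (tanh outputs are almost never exactly 0)."""
    with torch.no_grad():
        return torch.sign(model(x))
```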
3.2 THE PROPOSED PHAROS-GUIDED ATTACK
3.2.1 PROBLEM FORMULATION
In hashing-based retrieval, the goal of an adversarial attack (i.e., a non-targeted attack) is to craft an adversarial example whose retrieval results are irrelevant to the original sample contents. For credibility, this objective can be achieved by maximizing the hash code distance between the adversarial example and its semantically relevant samples while simultaneously minimizing its distance from irrelevant samples, rather than considering only the benign sample itself. Thus, for a given clean image x, the objective of its adversarial example x′ is formulated as follows:
$$\max_{x'} \sum_i^{N_p} \sum_j^{N_n} \left[ w_i D\!\left(F(x'), F(x_i^{(p)})\right) - w_j D\!\left(F(x'), F(x_j^{(n)})\right) \right], \quad \text{s.t. } \|x - x'\|_p \le \epsilon, \tag{2}$$
where $F(\cdot)$ is the hashing function approximated by the deep model $f_\theta(\cdot)$, and $D(\cdot, \cdot)$ is a distance metric. $w_i$ and $w_j$ represent distance weights, $x_i^{(p)}$ is a positive sample semantically related to the original sample $x$, and $x_j^{(n)}$ is a negative sample of $x$. Because the maximized term of Eq. (2) pushes the hash code of the adversarial example close to those of unrelated samples and away from those of semantically relevant samples, the optimal attack strength can be achieved in theory. $N_p$ and $N_n$ are the numbers of positive and negative samples, respectively. $\|\cdot\|_p$ ($p = 1, 2, \infty$) is the $L_p$ norm, which keeps the pixel difference between the adversarial sample and the original sample no greater than $\epsilon$, so that the adversarial perturbation remains imperceptible.
3.2.2 GENERATION OF PHAROS CODES.
Actually, the maximized objective in Eq. (2) is equivalent to finding a hash code $b'$ that satisfies:
$$\max_{b'} \sum_i \sum_j \left[ w_i D_H(b', b_i^{(p)}) - w_j D_H(b', b_j^{(n)}) \right], \tag{3}$$
where $D_H$ is the Hamming distance measure, $b_i^{(p)}$ is the hash code of the positive sample $x_i^{(p)}$, and $b_j^{(n)}$ is the binary code of the negative sample $x_j^{(n)}$. Subsequently, we can optimize the adversarial example by minimizing its distance from $b'$, i.e.,
$$\min_{x'} D_H(b', F(x')). \tag{4}$$
For any hash codes $\hat{b}$ and $\check{b}$, we know that $D_H(\hat{b}, \check{b}) = \frac{1}{2}(K - \hat{b}^\top \check{b})$. Accordingly, we deduce that $D_H(\hat{b}, \check{b}) = K - D_H(-\hat{b}, \check{b})$. Letting the hash code $b^\star = -b'$, Eqs. (3) and (4) can be reformulated as follows:
$$\min_{b^\star} \sum_i \sum_j \left\{ w_i \left[ D_H(b^\star, b_i^{(p)}) - K \right] - w_j \left[ D_H(b^\star, b_j^{(n)}) - K \right] \right\}, \quad \max_{x'} D_H(b^\star, F(x')) - K. \tag{5}$$
Removing the constants, Eq. (5) can be written as follows:
$$\min_{b^\star} \sum_i \sum_j \left[ w_i D_H(b^\star, b_i^{(p)}) - w_j D_H(b^\star, b_j^{(n)}) \right], \quad \max_{x'} D_H(b^\star, F(x')). \tag{6}$$
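The identity $D_H(\hat{b}, \check{b}) = \frac{1}{2}(K - \hat{b}^\top \check{b})$ and its corollary used in this reformulation can be sanity-checked numerically; a minimal example (ours):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 32
b1 = rng.choice([-1, 1], size=K)
b2 = rng.choice([-1, 1], size=K)

hamming = np.sum(b1 != b2)               # direct Hamming distance
via_inner = 0.5 * (K - b1 @ b2)          # identity D_H = (K - b1^T b2) / 2
assert hamming == via_inner
assert hamming == K - np.sum(-b1 != b2)  # corollary: D_H(b1, b2) = K - D_H(-b1, b2)
```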
Due to the binary characteristic of the hash code, we can directly calculate the optimal code (named the pharos code $b^\star$) in problem (6) by the following Pharos Generation Method (PGM), i.e.,
$$b^\star = \mathrm{sign}\!\left( \sum_i^{N_p} \sum_j^{N_n} \left( w_i b_i^{(p)} - w_j b_j^{(n)} \right) \right), \tag{7}$$
where $\mathrm{sign}(\cdot)$ is the sign function. The proof of PGM is given in Appendix A.1. In addition, we define $w_i$ and $w_j$ as follows:
$$w_i = s_i, \quad w_j = 1 - s_j, \tag{8}$$
where $s_{i/j}$ ($s_{i/j} \in [0, 1]$) denotes the similarity between the adversarial example and the $i/j$-th benign sample. If the labels $y_i$ and $y_j$ of $x_i$ and $x_j$ are given, we can calculate $s_{i/j}$ by the Dice coefficient, i.e., $s_{i/j} = \frac{2\,|y \cap y_{i/j}|}{|y| + |y_{i/j}|}$. Otherwise, $s_{i/j}$ is usually determined by the optimization objective of the attacked hashing model; for instance, $s_i = 1$ and $s_j = 0$ are widely adopted in learning to hash.
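Since Eq. (7) is a closed form, the pharos code can be computed in a single vectorized pass. Below is a minimal NumPy sketch (our variable names; the double sum over $(i, j)$ in Eq. (7) is expanded into the equivalent weighted sums before taking the sign):

```python
import numpy as np

def dice(y: np.ndarray, y_other: np.ndarray) -> float:
    """Dice coefficient between two multi-hot (0/1) label vectors (Eq. 8)."""
    return 2.0 * np.sum(y & y_other) / (np.sum(y) + np.sum(y_other))

def pharos_code(y, B_pos, Y_pos, B_neg, Y_neg):
    """PGM (Eq. 7). The double sum over (i, j) expands to
    Nn * sum_i w_i b_i^(p) - Np * sum_j w_j b_j^(n) before the sign.

    B_pos: (Np, K) codes of positives, Y_pos: (Np, C) their labels;
    B_neg: (Nn, K) codes of negatives, Y_neg: (Nn, C) their labels.
    """
    w_pos = np.array([dice(y, yi) for yi in Y_pos])         # w_i = s_i
    w_neg = np.array([1.0 - dice(y, yj) for yj in Y_neg])   # w_j = 1 - s_j
    Np, Nn = len(B_pos), len(B_neg)
    return np.sign(Nn * (w_pos @ B_pos) - Np * (w_neg @ B_neg))
```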
3.2.3 GENERATING ADVERSARIAL EXAMPLES.
Since the pharos code is found, the attack problem described in Eq. (2) can be translated into the following objective under the $L_\infty$ constraint:
$$\max_{x'} D_H(b^\star, F(x')), \quad \text{s.t. } \|x - x'\|_\infty \le \epsilon. \tag{9}$$
According to $D_H(\hat{b}, \check{b}) = \frac{1}{2}(K - \hat{b}^\top \check{b})$, Eq. (9) is equivalent to:
$$\max_{x'} L_a = -\frac{1}{K} (b^\star)^\top f_\theta(x'), \quad \text{s.t. } \|x - x'\|_\infty \le \epsilon. \tag{10}$$
However, Eq. (10) only considers the overall similarity between $f_\theta(x')$ and $b^\star$ and cannot effectively ensure that each bit of $f_\theta(x')$ differs in sign from $b^\star$. Intuitively, bits with small differences between $f_\theta(x')$ and $b^\star$ should be given large weights. Hence, we add a weighting vector $\omega$ to $L_a$ to push each bit of $f_\theta(x')$ away from the corresponding bit of $b^\star$. Formally,
$$L_a = -\frac{1}{K} \omega^\top u, \quad u = b^\star \circ f_\theta(x'), \tag{11}$$
where $\circ$ represents the Hadamard product and $\omega$ has the same dimensions as $b^\star$. For efficiency, it is not necessary to enforce each element $u_k$ of $u$ to approach $-1$; instead, we let $u_k$ converge at $t$ ($-1 < t < 0$). Thus, the component $\omega_k$ of $\omega$ is defined as
$$\omega_k = \begin{cases} u_k - 2t, & u_k > t \\ -t^2, & \text{otherwise,} \end{cases} \tag{12}$$
where $u_k$ is the $k$-th element of $u$. As shown in Figure 2, $t$ controls the margin between $b^\star_k$ and $f_\theta(x')_k$; $t$ is set to $-0.8$ by default. Furthermore, for efficiency we follow (Yang et al., 2020) and make our pharos-guided attack focus on the bits of $f_\theta(x')$ whose signs have not yet been pushed away from $b^\star$ (i.e., $u_k > t$):
$$L_a = -\frac{1}{\pi} [m \circ \omega]^\top u, \quad u = b^\star \circ f_\theta(x'), \tag{13}$$
where $m \in \{0, 1\}^K$ is a mask and $\pi$ is the number of non-zero elements in $m$. The element $m_k$ of $m$ is defined as
$$m_k = \begin{cases} 1, & u_k > t \\ 0, & \text{otherwise.} \end{cases} \tag{14}$$
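Putting Eqs. (11)-(14) together, the masked, bit-weighted loss can be implemented compactly. A minimal PyTorch sketch (our names; `f_out` is assumed to be the tanh output of the hashing network):

```python
import torch

def pga_loss(f_out: torch.Tensor, b_star: torch.Tensor, t: float = -0.8):
    """Masked, bit-weighted attack loss L_a of Eq. (13).

    f_out  : f_theta(x') in (-1, 1)^K (tanh outputs)
    b_star : pharos code in {-1, +1}^K
    """
    u = b_star * f_out                          # Hadamard product (Eq. 11)
    omega = torch.where(u > t, u - 2 * t,       # omega_k (Eq. 12)
                        torch.full_like(u, -t * t))
    m = (u > t).float()                         # mask m_k (Eq. 14)
    pi = m.sum().clamp(min=1.0)                 # number of still-active bits
    return -(m * omega * u).sum() / pi          # L_a, to be *maximized*
```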
Notably, the pharos-guided attack with Eq. (10) is called PgA$^\dagger$, and that with Eq. (13) is the default PgA. Unlike HAG and SDHA, which use the SGD (Robbins & Monro, 1951) or Adam (Kingma & Ba, 2015) optimizer (Yang et al., 2020; Lu et al., 2021) with large numbers of iterations, this paper adopts PGD (Madry et al., 2017) to optimize $x'$ with $T$ ($T = 100$ by default) iterations for efficiency, i.e.,
$$x'_T = S_\epsilon\!\left( x'_{T-1} + \eta \cdot \mathrm{sign}\!\left( \nabla_{x'_{T-1}} L_a \right) \right), \quad x'_0 = x + r, \tag{15}$$
where $\eta$ is the step size and $S_\epsilon$ projects $x'$ into the $\epsilon$-ball (Madry et al., 2017) around $x$. $r$ is random noise sampled from the uniform distribution $U(-\epsilon, \epsilon)$.
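The complete pharos-guided PGD loop of Eq. (15) then reads as follows; this is a sketch under our own naming, reusing the hypothetical `pga_loss` helper above:

```python
import torch

def pga_attack(model, x, b_star, eps=8/255, eta=1/255, T=100, t=-0.8):
    """Pharos-guided PGD attack (Eq. 15): ascend on L_a, project to the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # x'_0 = x + r
    for _ in range(T):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(pga_loss(model(x_adv), b_star, t), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + eta * grad.sign()                      # gradient-sign step
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # S_eps projection
            x_adv = x_adv.clamp(0, 1)                              # keep a valid image
    return x_adv.detach()
```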
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Datasets. We adopt three popular datasets used in hashing-based retrieval to evaluate our method in extensive experiments: FLICKR-25K (Huiskes & Lew, 2008), NUS-WIDE (Chua et al., 2009) and MS-COCO (Lin et al., 2014). The FLICKR-25K dataset comprises 25,000 Flickr images with 38 labels. We sample 1,000 images as the query set and regard the remaining images as the database, following (Wang et al., 2021a). Moreover, we randomly select 5,000 instances from the database to train the hashing models. The NUS-WIDE dataset contains 269,648 images annotated with 81 concepts. We sample a subset with the 21 most popular concepts, which consists of 195,834 images. 2,100 images are sampled from the subset as queries, while the remaining images form the database. We randomly select 10,500 images from the database for the training set (Wang et al., 2021b). The MS-COCO dataset consists of 82,783 training samples and 40,504 validation samples, where each instance is annotated with at least one of the 80 categories. After combining the training and validation sets, we randomly pick 5,000 instances as queries and use the rest as the database. For the training set, 10,000 images are randomly selected from the database. In addition, we conduct an extra experiment on CIFAR-10 (see Appendix A.2).
Protocols. To evaluate the performance of PgA, we report the standard metrics, i.e., Mean Average Precision (MAP) (Yang et al., 2020) and Precision-Recall (PR) curves. Following (Bai et al., 2020), we calculate MAP values on the top 5,000 results from the database.
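For reference, the MAP protocol can be computed from Hamming-ranked retrieval lists. The sketch below shows one common convention for the average precision of a single query over the top-k results (our code, not the authors' evaluation script):

```python
import numpy as np

def average_precision(query_code, db_codes, relevant, topk=5000):
    """AP of one query: rank the database by Hamming distance to the query,
    then average the precision at each position where a relevant item is
    retrieved. MAP is the mean of this value over all queries."""
    K = query_code.shape[0]
    dist = 0.5 * (K - db_codes @ query_code)        # Hamming distances, shape (M,)
    order = np.argsort(dist)[:topk]
    rel = relevant[order].astype(float)             # 1 iff a label is shared
    if rel.sum() == 0:
        return 0.0
    precision_at_hit = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision_at_hit * rel).sum() / rel.sum())
```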
Baselines. Following (Yang et al., 2020; Bai et al., 2020), we adopt DPH as the default attacked hashing method, which is a generic algorithm in deep hashing-based retrieval. AlexNet (Krizhevsky et al., 2012) is selected as the default backbone network to implement the hashing models on FLICKR-25K, NUS-WIDE, and MS-COCO. We also evaluate the attack performance of our method against the defense model trained by ATRDH (Wang et al., 2021a), the only adversarial training algorithm in deep hashing. We compare the proposed algorithm with multiple hashing attack methods, including P2P, DHTA, ProS-GAN, THA, HAG, and SDHA. For targeted attacks, we randomly select a target label that does not share any category with the true label. Other details of these methods are kept consistent with the original literature.
Implementation Details. We train the target hashing models with stochastic gradient descent (SGD), using an initial learning rate of 0.01 and a momentum of 0.9. We fix the mini-batch size to 32 and the weight decay parameter to 5 × 10⁻⁴. All images are resized to 224 × 224 and normalized to [0, 1] before being fed into the hashing models. For the proposed attack method PgA, we adopt PGD (Madry et al., 2017) to optimize the adversarial examples. The step size η and the number of iterations T are set to 1/255 and 100, respectively. The perturbation budget ϵ is set to 8/255. All code is implemented in PyTorch 1.12 and executed on NVIDIA RTX 3090 GPUs.
4.2 ATTACK RESULTS
Table 1 and Table 2 present the attack performance (MAP) of different attack methods on original deep hashing networks without defense and on adversarially trained models, respectively. The lower the MAP value, the stronger the attack. The "Clean" rows in these tables correspond to querying with benign images, so those MAP values reflect the original retrieval performance of the hashing model without attack. From Table 1, we observe that the proposed method greatly reduces the MAP values on all three datasets with hash code lengths varying from 16 to 64 bits and outperforms all other attacks. Compared to DHTA (Bai et al., 2020), the strongest targeted attack in Table 1, our PgA achieves average boosts of 18.68%, 14.66%, and 9.03% over the tested bit lengths on FLICKR-25K, NUS-WIDE, and MS-COCO, respectively. Moreover, compared with the state-of-the-art non-targeted attack SDHA, our method outperforms it by an average of 4.40%, 4.09%, and 2.59% on the three datasets. As for the defense model trained by ATRDH (the only adversarial training algorithm in deep hashing), Table 2 shows that all the MAP values of PgA are lower than those of the other attack methods. Even against SDHA, the proposed PgA brings an average improvement of 7.70%, 5.39%, and 6.02% for FLICKR-25K, NUS-WIDE, and MS-COCO, respectively. The superior performance of our method stems from the pharos code, which considers the positive and negative samples simultaneously; in contrast, HAG and SDHA merely use the information from the benign and positive samples, respectively. Thus, the pharos code-based PgA surpasses the previous state of the art.
For a more comprehensive comparison, the PR curves of the different methods on the three datasets with 32-bit codes are shown in Figures 3 and 4. The curves of our method always lie below all others, demonstrating that our algorithm attacks hashing models more effectively.
In addition, we compare the attack performance of PgA with that of PgA† to evaluate its effectiveness. The results in Tables 1 and 2 show that PgA with Eq. (13) is better than PgA† with Eq. (10) in most cases. Hence, the accelerated operation in Eq. (13) effectively improves the attack.
4.3 EFFICIENCY ANALYSIS
To confirm the high efficiency of the proposed method, we record the MAP and time of various attack methods, where the time denotes the average interval (seconds per image) needed to attack the test set. Note that this time includes the training time of the attack model where applicable, e.g., for ProS-GAN and THA. The results are summarized in Table 3 and Table 4. Our PgA achieves both the strongest attack performance and the shortest attack time on all datasets. Since Tables 3 and 4 are similar in terms of time results, we mainly focus on Table 3 in our analysis. Specifically, ProS-GAN has the lowest attack efficiency because it requires a few hours to train a generative network for the attack; it takes about 127, 125, and 52 times longer than PgA on FLICKR-25K, NUS-WIDE, and MS-COCO, respectively. Moreover, P2P, DHTA, and HAG have similar attack times and are much faster than ProS-GAN. Nevertheless, since they require 2000 gradient iterations, they are still more than 13 times slower than our PgA. In summary, PgA not only outperforms all previous methods in attack performance but also produces adversarial examples at the fastest speed.
To further verify the high efficiency of our PgA under the same setting, we use PGD-20 to optimize the adversarial perturbations for all attack methods, as shown in Table 5. PgA has the same speed as P2P, DHTA, and HAG, because all of them can directly and extremely quickly calculate the target code that guides the generation of adversarial samples. However, THA spends considerable time learning to construct the target code with a fully-connected network, so it is much slower than PgA. Furthermore, SDHA is less efficient than PgA because of its complex objective function (Lu et al., 2021).
4.4 ANALYSIS ON HYPER-PARAMETERS
Effect of T & Efficiency. Figures 5(a) and 5(b) present the attack performance (MAP) with different numbers of PGD attack iterations (i.e., T). Overall, the MAP values decrease as T increases. When T exceeds 20, the attack performance levels off for DPH, and it levels off at around 50 for ATRDH. For the same number of iterations, the attack performance of PgA maintains a large gap over the other methods. Therefore, PgA is an efficient tool for evaluating the robustness of deep hashing networks.
Effect of t. Figure 5(c) illustrates the effect of the hyper-parameter t on the attack performance. For the DPH model without defense, there is no appreciable change in MAP across different values of t. For ATRDH, the attack performance decreases slightly as t increases. Although the attack performance is not highly sensitive to t, picking an appropriate value of t still cannot be ignored.
4.5 UNIVERSALITY ON DIFFERENT HASHING METHODS
We argue that the proposed attack algorithm is generic to most popular hashing models. To verify this point, we conduct adversarial attacks on multiple hashing methods with a 32-bit hash code length, including DPSH (Li et al., 2016), HashNet (Cao et al., 2017), DSDH (Li et al., 2017), DCH (Cao et al., 2018) and CSQ (Yuan et al., 2020), implemented with VGG11. The results are reported in Table 6. Our PgA effectively fools all the listed hashing models with better attack performance than the other methods. First, when testing on hashing methods without defense, our PgA exceeds the previous state-of-the-art SDHA in all cases; notably for DCH, there is a 7.31% gap between PgA and SDHA. Moreover, under the defense of ATRDH, PgA reduces the MAP of all hashing methods to markedly low values, and it brings a 24.63% improvement on the DCH model compared to SDHA. These results demonstrate the universality of the proposed attack method, which can be applied to most popular hashing algorithms. We further evaluate a defense model trained with PgA; please refer to Appendix A.3.
5 CONCLUSION
In this paper, we proposed an adversarial attack method (i.e., PgA) for efficiently evaluating the adversarial robustness of deep hashing-based retrieval. Specifically, we provided PGM to quickly obtain the pharos code as the optimal representative of the image semantics for the attack in deep hashing. Moreover, PgA took the pharos code as the "label" to guide the non-targeted attack, where the similarity between the pharos code and the hash code of the adversarial example was minimized. Besides, we added adaptive weighting into the Hamming distance calculation, which further boosts the strength of PgA. Experiments showed that our algorithm achieves state-of-the-art attack performance and efficiency compared to previous attack methods in deep hashing-based retrieval.
A APPENDIX
A.1 PROOF OF PGM
Theorem. The pharos code $b^\star$ satisfying Eq. (6) can be calculated by the Pharos Generation Method (PGM), i.e.,
$$b^\star = \arg\min_{b \in \{-1,+1\}^K} \sum_i \sum_j \left[ w_i D_H(b, b_i^{(p)}) - w_j D_H(b, b_j^{(n)}) \right] = \mathrm{sign}\!\left( \sum_i^{N_p} \sum_j^{N_n} \left( w_i b_i^{(p)} - w_j b_j^{(n)} \right) \right).$$
Proof. We define the following function:
$$\psi(b) = \sum_i \sum_j \left[ w_i D_H(b, b_i^{(p)}) - w_j D_H(b, b_j^{(n)}) \right].$$
Since the pharos code $b^\star$ needs to be the optimal solution of the minimization objective, the theorem is equivalent to the following inequality:
$$\psi(b) \ge \psi(b^\star), \quad \forall\, b \in \{-1,+1\}^K.$$
Let $b = \{b_1, b_2, \dots, b_K\}$; then we have
$$\begin{aligned}
\psi(b) &= \sum_i \sum_j \left[ w_i \tfrac{1}{2}\left(K - b^\top b_i^{(p)}\right) - w_j \tfrac{1}{2}\left(K - b^\top b_j^{(n)}\right) \right] \\
&= -\tfrac{1}{2} \sum_i \sum_j \left[ w_i b^\top b_i^{(p)} - w_j b^\top b_j^{(n)} \right] + \xi \\
&= -\tfrac{1}{2} \sum_i \sum_j \left[ w_i \sum_{k=1}^{K} b_k b_{ik}^{(p)} - w_j \sum_{k=1}^{K} b_k b_{jk}^{(n)} \right] + \xi \\
&= -\tfrac{1}{2} \sum_i \sum_j \sum_{k=1}^{K} b_k \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) + \xi \\
&= -\tfrac{1}{2} \sum_{k=1}^{K} b_k \sum_i \sum_j \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) + \xi,
\end{aligned}$$
where $\xi$ is a constant. Similarly,
$$\psi(b^\star) = -\tfrac{1}{2} \sum_{k=1}^{K} b_k^\star \sum_i \sum_j \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) + \xi.$$
Due to the nature of the absolute value, we have
$$\begin{aligned}
\psi(b) &= -\tfrac{1}{2} \sum_{k=1}^{K} b_k \sum_i \sum_j \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) + \xi \\
&\ge -\tfrac{1}{2} \sum_{k=1}^{K} \left| b_k \sum_i \sum_j \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) \right| + \xi \\
&= -\tfrac{1}{2} \sum_{k=1}^{K} |b_k| \left| \sum_i \sum_j \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) \right| + \xi \\
&= -\tfrac{1}{2} \sum_{k=1}^{K} \left| \sum_i \sum_j \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) \right| + \xi \\
&= -\tfrac{1}{2} \sum_{k=1}^{K} \mathrm{sign}\!\left( \sum_i \sum_j \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) \right) \sum_i \sum_j \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) + \xi \\
&= -\tfrac{1}{2} \sum_{k=1}^{K} b_k^\star \sum_i \sum_j \left( w_i b_{ik}^{(p)} - w_j b_{jk}^{(n)} \right) + \xi \\
&= \psi(b^\star).
\end{aligned}$$
That is, $\psi(b) \ge \psi(b^\star)$. Hence, the theorem is proved.
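The theorem can also be verified by brute force for small $K$: enumerate all $2^K$ candidate codes and check that the PGM closed form attains the minimum of $\psi$. A minimal check (our code; note the double sum scales the positive and negative terms by $N_n$ and $N_p$, respectively):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
K, Np, Nn = 6, 3, 4
B_pos = rng.choice([-1, 1], size=(Np, K))
B_neg = rng.choice([-1, 1], size=(Nn, K))
w_pos, w_neg = rng.random(Np), rng.random(Nn)

def hamming(a, b):
    return 0.5 * (K - a @ b)

def psi(b):
    return sum(w_pos[i] * hamming(b, B_pos[i]) - w_neg[j] * hamming(b, B_neg[j])
               for i in range(Np) for j in range(Nn))

brute = min((np.array(c) for c in itertools.product([-1, 1], repeat=K)), key=psi)
pgm = np.sign(Nn * (w_pos @ B_pos) - Np * (w_neg @ B_neg))   # closed form, Eq. (7)
assert np.isclose(psi(brute), psi(pgm))   # PGM attains the brute-force optimum
```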
A.2 ATTACK RESULTS ON CIFAR-10
Table 7 shows the results of the hashing attack methods on the single-label dataset CIFAR-10 (Cao et al., 2017). We can observe that our PgA is only slightly better than the state-of-the-art SDHA for DPH. Nevertheless, the proposed PgA outperforms HAG and SDHA by over 2.23%. Especially in the 64-bit case, PgA brings boosts of 4.05% and 10.19% compared to HAG and SDHA, respectively.
A.3 ADVERSARIAL TRAINING
We use the generated adversarial samples for adversarial training to verify whether the proposed method is still valid. The objective of the adversarial training is formulated as follows:
$$\min_\theta L_{adv} = L_{ori} - \sum_{i=1}^{N} \frac{1}{K} \left( b_i^\star \right)^\top f_\theta(x_i'), \tag{16}$$
where $b_i^\star$ is the pharos code of the instance $x_i$, and $x_i'$ is the adversarial example of $x_i$. The latter term in Eq. (16) rebuilds the similarity between the adversarial sample and its true semantics. $L_{ori}$ is the original loss function of the deep hashing model, which ensures the basic performance of hash learning. The experimental results are illustrated in Table 8. Adversarial training does improve the defense capability of the deep hashing model, but our attack method remains valid and significantly outperforms the other methods. | 1. What is the focus and contribution of the paper on adversarial attacks?
2. What are the strengths of the proposed approach, particularly regarding its efficiency and effectiveness?
3. What are the weaknesses of the paper, especially regarding its explanations and experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the authors propose the Pharos-guided Attack (PgA) for efficient adversarial robustness evaluation of deep hashing networks. Specifically, they provided PGM to quickly obtain the pharos code as the representative of the image semantics for the attack in deep hashing. Moreover, PgA took the pharos code as the "label" to guide the non-targeted attack, where the similarity between the pharos code and the hash code of the adversarial example was minimized. Besides, they added adaptive weighting into the Hamming distance calculation, which further boosts the strength of PgA.
Strengths And Weaknesses
Strengths: 1. The paper is well organized and clearly written. 2. Experiments demonstrate that PgA can be applied to deep hashing frameworks and achieves state-of-the-art attack performance with high efficiency. Weaknesses:
Some contents are not clearly introduced. For example, the authors claim that the proposed pharos code "preserves the semantic similarity with relevant samples and dissimilarity with irrelevant samples" and serves "as the precise semantic representative of the original image." However, the authors do not further explain these points later in the paper. These claims should be explained clearly.
The experiments are incomplete and lack ablation studies demonstrating which components of PGM are effective.
TopN-precision curves are essential for evaluating model performance in hash retrieval. The authors should include these results.
Clarity, Quality, Novelty And Reproducibility
This paper proposes a novel Pharos-guided Attack to efficiently evaluate the adversarial robustness of deep hashing networks, which has a certain degree of innovation. Some details are provided to improve the reproducibility of the method. The paper is well organized, but the language can be improved. However, some content is not clearly introduced and the experiments are not sufficient.
ICLR | Title
Efficient Evaluation of Adversarial Robustness for Deep Hashing based Retrieval
Abstract
Deep hashing has been extensively applied to massive image retrieval due to its efficiency and effectiveness. Recently, several adversarial attacks have been presented to reveal the vulnerability of deep hashing models against adversarial examples. However, existing attack methods suffer from degraded performance or inefficiency because they underutilize the semantic relations between original samples or spend a lot of time learning from these samples. In this paper, we propose a novel Pharos-guided Attack, dubbed PgA, to evaluate the adversarial robustness of deep hashing networks efficiently. Specifically, we design the pharos code to represent the semantics of the benign image, which preserves the similarity with semantically related samples and dissimilarity with irrelevant examples. It is proven that we can quickly calculate the pharos code via a simple mathematical formula rather than time-consuming iterative procedures. Thus, PgA can directly conduct a reliable and efficient attack on deep hashing-based retrieval by minimizing the similarity between the hash code of the adversarial example and the pharos code. Extensive experiments on the benchmark datasets verify that the proposed algorithm outperforms the prior state of the art in both attack strength and speed.
1 INTRODUCTION
It is challenging to rapidly and effectively search for required information in vast collections in the current era of big data. Learning to hash (hashing) (Wang et al., 2018) has attracted much attention in large-scale image retrieval due to its exceptional benefits of efficient XOR operations and low storage cost, achieved by mapping high-dimensional data to compact binary codes. In particular, deep hashing (Xia et al., 2014; Li et al., 2016; Cao et al., 2017), which learns nonlinear hash functions with deep neural networks (DNNs), has become a predominant image search technique since it delivers better retrieval accuracy than conventional hashing.
Recent works (Yang et al., 2020; Bai et al., 2020; Wang et al., 2021b;a; Zhang et al., 2021; Xiao & Wang, 2021; Lu et al., 2021) have revealed that deep hashing models are susceptible to adversarial examples. Although these imperceptible samples are crafted by adding small perturbations to original samples, they are sufficient to deceive models into making inaccurate predictions. There is no doubt that such malicious attacks pose grave security threats to image retrieval systems based on deep hashing. In a deep hashing-based face recognition system, for instance, adversarial examples can mislead the system into matching the faces of specific individuals in the database, infiltrating the system effectively. Consequently, there is significant demand for research into these security concerns in deep hashing-based retrieval.
A few studies (Yang et al., 2020; Bai et al., 2020; Wang et al., 2021b;a; Lu et al., 2021) have been conducted on adversarial attacks and adversarial defenses in deep hashing-based retrieval to date. Existing attack techniques have been shown to be effective, but they are neither efficient nor reliable for evaluating the adversarial robustness (i.e., defense performance) of deep hashing networks. Firstly, these methods (Yang et al., 2020; Bai et al., 2020; Lu et al., 2021) suffer from limited attack performance because they do not fully leverage the semantic relevance between available samples. For instance, SDHA (Lu et al., 2021) only reduces the similarity of adversarial samples with their semantically relevant images, ignoring the irrelevant ones. Secondly, though some hashing attack methods simultaneously consider similar and dissimilar pairs, they use time-consuming neural networks to learn discriminative semantic representations from these pairs for a precise semantic attack, e.g., ProS-GAN (Wang et al., 2021b) and THA (Wang et al., 2021a). In this paper, we focus on remedying the deficiencies of previous hashing attacks in both effectiveness and efficiency, as shown in Fig. 1. Furthermore, strong adversarial attack methods with high efficiency can provide excellent benchmarks of model robustness and facilitate the development of adversarial defense strategies in deep hashing-based retrieval.
In this study, we propose the Pharos-guided Attack (PgA) for efficient adversarial robustness evaluation of deep hashing networks. The core idea is to quickly compute the pharos code, which reflects the semantics of the original image, and then use it to direct the generation of a potent adversarial sample. Specifically, we first design an optimal hash code (namely, the pharos code) as the discriminative representative of the benign image semantics, which maintains both the similarities to semantically relevant samples and the dissimilarities to irrelevant samples. Benefiting from the binary property of hash codes, we prove that the proposed Pharos Generation Method (PGM) can directly calculate the pharos code through a simple mathematical formula (refer to Appendix A.1 for the proof). Thus, the pharos codes of the input data are calculated immediately before the adversarial attack. Subsequently, based on the pharos code, it is feasible to carry out an efficient adversarial hashing attack by maximizing the Hamming distance between the hash code of the adversarial example and the pharos code. Owing to the quality of the pharos codes, our attack considerably enhances the efficiency and effectiveness of adversarial robustness verification, as shown in Fig. 1. In summary, our main contributions are as follows:
• We create the pharos code as the precise semantic representative of the original image content to aid in the construction of the adversarial attack framework for deep hashing-based retrieval. It should be emphasized that our proven mathematical formula in PGM can generate the pharos code instantly.
• A simple pharos-guided attack algorithm is provided, i.e., PgA, which is an efficient and reliable method to evaluate the adversarial robustness of deep hashing networks.
• Extensive experiments demonstrate that PgA can be applied to deep hashing frameworks and achieves state-of-the-art attack performance with high efficiency.
2 RELATED WORK
In this section, we briefly review the most relevant works on deep hashing-based image retrieval, adversarial attacks, and adversarial training for defense.
| 1. What is the focus and contribution of the paper on fast adversarial attack strategies?
2. What are the strengths of the proposed algorithm regarding its novelty and improvement over existing methods?
3. What are the weaknesses of the paper, particularly regarding its similarity to prior works and inconsistencies in experimental results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a fast adversarial attack strategy based on adversarial training for hashing-based image retrieval. The authors generate hash codes as the semantic representation of the original image and attack the training process with a simple calculation from that code.
Strengths And Weaknesses
Strength: Hashing-based image retrieval has been explored for a long time, but methods that utilize adversarial attacks to boost the robustness of hashing retrieval have only become popular and active in recent years. The proposed algorithm contains some novelty in improving existing adversarial hashing attacks.
Weakness:
Please explain the relationship between this paper and Wang et al. [1], since Section 3 is exactly the same as in that paper, which may raise ethics concerns.
The paragraph after Eq. (8) is unclear because the authors never mention s_{i/j} before.
The experimental results on baseline methods are not consistent between this paper and Wang et al. [1] under the exact same training settings. Also, the reference results illustrated in Figure 4 differ from the corresponding figure in Wang et al. [1], which makes the experimental results unconvincing.
[1] Wang et al. "Centralized Adversarial Learning for Robust Deep Hashing." arXiv preprint arXiv:2204.10779 (2022).
Clarity, Quality, Novelty And Reproducibility
Some parts of this paper are unclear. The overall writing quality is average, and the paper appears reproducible.
ICLR | Title
Transformer-based Transform Coding
Abstract
Neural data compression based on nonlinear transform coding has made great progress over the last few years, mainly due to improvements in prior models, quantization methods and nonlinear transforms. A general trend in many recent works pushing the limit of rate-distortion performance is to use ever more expensive prior models that can lead to prohibitively slow decoding. Instead, we focus on more expressive transforms that result in a better rate-distortion-computation trade-off. Specifically, we show that nonlinear transforms built on Swin-transformers can achieve better compression efficiency than transforms built on convolutional neural networks (ConvNets), while requiring fewer parameters and shorter decoding time. Paired with a compute-efficient Channel-wise AutoRegressive Model prior, our SwinT-ChARM model outperforms VTM-12.1 by 3.68% in BD-rate on Kodak with comparable decoding speed. In the P-frame video compression setting, we are able to outperform the popular ConvNet-based scale-space-flow model by 12.35% in BD-rate on UVG. We provide model scaling studies to verify the computational efficiency of the proposed solutions and conduct several analyses to reveal the source of coding gain of transformers over ConvNets, including better spatial decorrelation, flexible effective receptive field, and more localized response of latent pixels during progressive decoding.
1 INTRODUCTION
Transform coding (Goyal, 2001) is the dominant paradigm for compression of multi-media signals, and serves as the technical foundation for many successful coding standards such as JPEG, AAC, and HEVC/VVC. Codecs based on transform coding divide the task of lossy compression into three modularized components: transform, quantization, and entropy coding. All three components can be enhanced by deep neural networks: autoencoder networks are adopted as flexible nonlinear transforms, deep generative models are used as powerful learnable entropy models, and various differentiable quantization schemes are proposed to aid end-to-end training. Thanks to these advancements, we have seen rapid progress in the domain of image and video compression. Particularly, the hyperprior line of work (Ballé et al., 2018; Minnen et al., 2018; Lee et al., 2019; Agustsson et al., 2020; Minnen & Singh, 2020) has led to steady progress of neural compression performance over the past two years, reaching or even surpassing state-of-the-art traditional codecs. For example, in image compression, BPG444 was surpassed by a neural codec in 2018 (Minnen et al., 2018), and (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) have claimed on-par or better performance than VTM (a test model of the state-of-the-art non-learned VVC standard).
One general trend in the advancement of neural image compression schemes is to develop ever more expressive yet expensive prior models based on spatial context. However, the rate-distortion improvement from context-based prior modeling often comes with a hefty price tag¹ in terms of decoding complexity. Notably, all existing works that claimed on-par or better performance than VTM (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) rely on slow and expensive spatial-context-based prior models.
∗Equal contribution. †Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.
¹ In the extreme case when a latent-pixel-level spatial autoregressive prior is used, decoding a single 512x768 image requires no less than 1536 interleaved executions of prior model inference and entropy decoding (assuming the latent is downsampled by a factor of 16x16).
The development of nonlinear transforms, on the other hand, is largely overlooked. This leads us to the following questions: can we achieve the same performance as that of expensive prior models by designing a more expressive transform together with simple prior models? And if so, how much more complexity in the transform is required?
Interestingly, we show that by leveraging and adapting the recent development of vision transformers, not only can we build neural codecs with simple prior models that outperform ones built on expensive spatial autoregressive priors, but we can do so with smaller transform complexity than their convolutional counterparts, attaining a strictly better rate-distortion-complexity trade-off. As can be seen in Figure 1, our proposed neural image codec SwinT-ChARM can outperform VTM-12.1 at comparable decoding time, which, to the best of our knowledge, is a first in the neural compression literature.
As main contributions, we 1) extend the Swin-Transformer (Liu et al., 2021) to a decoder setting and build Swin-transformer based neural image codecs that attain better rate-distortion performance with lower complexity compared with existing solutions, 2) verify its effectiveness in video compression by enhancing scale-space-flow, a popular neural P-frame codec, and 3) conduct extensive analysis and ablation studies to explore differences between convolutions and transformers, and investigate potential sources of coding gain.
2 BACKGROUND & RELATED WORK
Conv-Hyperprior The seminal hyperprior architecture (Ballé et al., 2018; Minnen et al., 2018) is a two-level hierarchical variational autoencoder, consisting of a pair of encoder/decoder ga, gs, and a pair of hyper-encoder/hyper-decoder ha, hs. Given an input image x, a pair of latent y = ga(x) and hyper-latent z = ha(y) is computed. The quantized hyper-latent ẑ = Q(z) is modeled and entropy-coded with a learned factorized prior. The latent y is modeled with a factorized Gaussian distribution p(y|ẑ) = N (µ, diag(σ)) whose parameters are given by the hyper-decoder (µ,σ) = hs(ẑ). The quantized version of the latent ŷ = Q(y−µ)+µ is then entropy coded and passed through the decoder gs to derive the reconstructed image x̂ = gs(ŷ). The transforms ga, gs, ha, hs are all parameterized as ConvNets (for details, see Appendix A.1).
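To make the data flow concrete, below is a minimal PyTorch-style sketch of the hyperprior forward pass described above. The module names and the assumption that the hyper-decoder outputs concatenated mean/scale channels are illustrative, not the paper's exact implementation.

import torch
import torch.nn as nn

class Hyperprior(nn.Module):
    """Minimal sketch of the two-level hyperprior forward pass (inference only)."""
    def __init__(self, g_a, g_s, h_a, h_s):
        super().__init__()
        self.g_a, self.g_s, self.h_a, self.h_s = g_a, g_s, h_a, h_s

    def forward(self, x):
        y = self.g_a(x)                              # latent y = ga(x)
        z = self.h_a(y)                              # hyper-latent z = ha(y)
        z_hat = torch.round(z)                       # Q(z), coded with a factorized prior
        mu, sigma = self.h_s(z_hat).chunk(2, dim=1)  # (mu, sigma) = hs(z_hat), assumed concatenated
        y_hat = torch.round(y - mu) + mu             # Q(y - mu) + mu
        x_hat = self.g_s(y_hat)                      # reconstruction
        return x_hat, (y_hat, mu, sigma), z_hat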
Conv-ChARM (Minnen & Singh, 2020) extends the baseline hyperprior architecture with a channel-wise auto-regressive model (ChARM)², in which the latent y is split along the channel dimension into S groups (denoted as y1, . . . , yS), and the Gaussian prior p(ys|ẑ, ŷ<s) is made autoregressive across groups, where the mean/scale of ys depends on the quantized latent in the previous groups ŷ<s. In practice, S = 10 provides a good balance of performance and complexity and is adopted here.
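A minimal sketch of how such a channel-wise autoregressive prior can be evaluated during training is given below; hyper_dec and group_nets are hypothetical module names standing in for the shared hyper context and the per-group parameter networks.

import torch

def charm_prior_params(y_hat_groups, z_hat, hyper_dec, group_nets):
    # Sketch: predict Gaussian (mu, sigma) of each latent group autoregressively,
    # conditioning on the hyper context and all previously decoded groups.
    ctx = hyper_dec(z_hat)
    params, decoded = [], []
    for s, net in enumerate(group_nets):              # S groups, e.g. S = 10
        prev = torch.cat(decoded, dim=1) if decoded else None
        inp = ctx if prev is None else torch.cat([ctx, prev], dim=1)
        mu_s, sigma_s = net(inp).chunk(2, dim=1)      # p(y_s | z_hat, y_<s)
        params.append((mu_s, sigma_s))
        decoded.append(y_hat_groups[s])               # teacher-forced during training
    return params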
Spatial AR models Most recent performance advancements in neural image compression are driven by the use of spatial auto-regressive/context models. Variants include causal global prediction (Guo et al., 2021), 3D context (Ma et al., 2021), block-level context (Wu et al., 2020), and nonlocal context (Li et al., 2020; Qian et al., 2021). One common issue with these designs is that decoding cannot be parallelized along spatial dimensions, leading to impractical³ decoding latency, especially for large-resolution images.
² For details refer to Figures 11 and 12 in the Appendix.
³ It is reported in (Wu et al., 2020) (Table I) that the decoding time of spatial autoregressive models on a 512x768 image ranges from 2.6s to more than half a minute, depending on the specific design. Also see Figure 1.
ConvNet-based transforms While the design space of prior models is extensively explored, nonlinear transforms, as an important component, have received less attention. A standard convolution encoder-decoder with GDN (Ballé et al., 2016; 2017) as activation is widely adopted in the literature. Later works introduce new transform designs, such as residual blocks with smaller kernels (Cheng et al., 2020), nonlocal (sigmoid gating) layers (Zhou et al., 2019; Chen et al., 2021), invertible neural networks (Xie et al., 2021), and PReLU as an efficient replacement of GDN (Egilmez et al., 2021).
Vision transformers Although many transform networks are proposed, they are still mainly based on ConvNets. Recently transformers (Vaswani et al., 2017) have been introduced to the vision domain and have shown performance competitive with ConvNets in many tasks, e.g. object detection (Carion et al., 2020), classification (Dosovitskiy et al., 2021), image enhancement (Chen et al., 2020), and semantic segmentation (Zheng et al., 2021). Inspired by their success, in this work we explore how vision transformers work as nonlinear transforms for image and video compression.
3 SWIN-TRANSFORMER BASED TRANSFORM CODING
Among the large number of vision transformer variants, we choose Swin Transformer (Liu et al., 2021) (hereafter referred to as SwinT) to build the nonlinear transforms, mainly because of 1) its linear complexity w.r.t. input resolution due to local window attention, and 2) its flexibility in handling varying input resolutions at test time, enabled by relative position bias and hierarchical architecture.
3.1 SWINT ENCODER AND DECODER
The original SwinT is proposed as a vision backbone, i.e. an encoder transform with downsampling. As shown in Figure 2, the SwinT encoder ga contains SwinT blocks interleaved with Patch Merge blocks. The Patch Merge block contains Space-to-Depth (for downsampling), LayerNorm, and Linear layers sequentially. A SwinT block performs local self-attention within each non-overlapping window of the feature maps and preserves the feature size. Consecutive SwinT blocks at the same feature size shift the window partitioning with respect to the previous block, promoting information propagation across neighboring windows.
We adopt SwinT encoder as the encoder transform ga in our model, and extend it to SwinT decoder gs by reversing the order of blocks in ga, and replacing the Patch Merge block with a Patch Split block, which contains Linear, LayerNorm, Depth-to-Space (for upsampling) layers in sequence. The architectures for hyper transforms ha, hs are similar to ga, gs with different configurations.
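As a concrete illustration, here is a minimal PyTorch sketch of the Patch Merge and Patch Split blocks as described above; the layer ordering follows the text, while the channel dimensions and the default factor r = 2 are illustrative assumptions.

import torch.nn as nn

class PatchMerge(nn.Module):
    """Space-to-Depth -> LayerNorm -> Linear, as in the SwinT encoder (sketch)."""
    def __init__(self, dim_in, dim_out, r=2):
        super().__init__()
        self.s2d = nn.PixelUnshuffle(r)            # Space-to-Depth by factor r
        self.norm = nn.LayerNorm(dim_in * r * r)
        self.proj = nn.Linear(dim_in * r * r, dim_out)

    def forward(self, x):                          # x: (B, C, H, W)
        x = self.s2d(x).permute(0, 2, 3, 1)        # (B, H/r, W/r, C*r*r)
        return self.proj(self.norm(x)).permute(0, 3, 1, 2)

class PatchSplit(nn.Module):
    """Linear -> LayerNorm -> Depth-to-Space, its mirror in the SwinT decoder."""
    def __init__(self, dim_in, dim_out, r=2):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out * r * r)
        self.norm = nn.LayerNorm(dim_out * r * r)
        self.d2s = nn.PixelShuffle(r)              # Depth-to-Space by factor r

    def forward(self, x):                          # x: (B, C, H, W)
        x = self.norm(self.proj(x.permute(0, 2, 3, 1)))
        return self.d2s(x.permute(0, 3, 1, 2))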
⁴ The ChARM architecture (Minnen & Singh, 2020) is detailed in Figure 12 of Appendix A.1.
With these four SwinT transforms, we propose two image compression models, SwinT-Hyperprior and SwinT-ChARM⁴, whose prior and hyperprior models are respectively the same as in Conv-Hyperprior and Conv-ChARM introduced in Section 2. The full model architectures are shown in Figure 2 and Figure 13.
3.2 EXTENSION TO P-FRAME COMPRESSION
To investigate the effectiveness of SwinT transforms for video compression, we study one popular P-frame compression model called Scale-Space Flow (SSF) (Agustsson et al., 2020). There are three instances of Conv-Hyperprior in SSF, which are respectively for compressing I-frames, scale-space flow, and residual. We propose a SwinT variant, referred to as SwinT-SSF, which is obtained by replacing the Conv transforms ga, gs in the flow codec and residual codec of SSF with SwinT transforms. To stabilize training of the flow codec in SwinT-SSF, we need to remove all LayerNorm layers and reduce the window size (e.g. from 8 to 4). The baseline SSF model will be referred to as Conv-SSF. Even though we build our solution on top of SSF, we believe this general extension can be applied to other ConvNet-based video compression models (Rippel et al., 2021; Hu et al., 2021) as well.
4 EXPERIMENTS AND ANALYSIS
4.1 EXPERIMENT SETUP
Training All image compression models are trained on the CLIC2020 training set. Conv-Hyperprior and SwinT-Hyperprior are trained for 2M batches. Conv-ChARM and SwinT-ChARM are trained for 3.5M and 3.1M steps, respectively. Each batch contains 8 random 256×256 crops from training images. The training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering the rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rate and distortion, for each solution we train 5 models with β ∈ {0.003, 0.001, 0.0003, 0.0001, 0.00003}. The detailed training schedule is in Appendix B.1. For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinT-SSF are trained on the Vimeo-90k dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size of 8, and crop size of 256×256, followed by 50K steps of training with learning rate 10−5 and crop size 384×256. The models are trained with 8 β values 2γ × 10−4 : γ ∈ {0, 1, ..., 7}. We adopt one critical trick to stabilize the training from (Jaegle et al., 2021; Meister et al., 2018), i.e. forwarding each video sequence twice during one optimization step (mini-batch), once in the original frame order and once in the reversed frame order. Finally, we add the flow loss⁵ only between 0 and 200K steps, which we found not critical for stable training but improves the RD.
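For reference, the per-batch objective can be written as the following short sketch; the bit estimates bits_y and bits_z are assumed to come from the entropy models, and all variable names are illustrative.

import torch

def rd_loss(x, x_hat, bits_y, bits_z, beta, num_pixels):
    # Rate-distortion objective L = D + beta * R (sketch).
    distortion = torch.mean((x - x_hat) ** 2)   # MSE in RGB color space
    rate = (bits_y + bits_z) / num_pixels       # estimated bits per pixel
    return distortion + beta * rate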
Evaluation We evaluate image compression models on 4 datasets: Kodak (Kodak, 1999), the CLIC2021 test set (CLIC, 2021), the Tecnick test set (Asuni & Giachetti, 2014), and the JPEG-AI test set (JPEG-AI, 2020). We use BPG and VTM-12.1 to code the images in YUV444 mode, and then calculate PSNR in RGB. For a fair comparison, all images are cropped to multiples of 256 to avoid padding for neural codecs.
We evaluated P-frame models on UVG (Mercat et al., 2020)⁶ and MCL-JCV (Wang et al., 2016), and compare them with the test model implementation of HEVC, referred to as HEVC (HM), and the open-source library implementation of HEVC, referred to as HEVC (x265). To align configurations, all video codecs are evaluated in low-delay-P mode with a fixed GOP size of 12.
Besides rate-distortion curves, we also evaluate different models using BD-rate (Tan et al., 2016), which represents the average bitrate savings for the same reconstruction quality. For image codecs, BD-rate is computed for each image and then averaged across all images; for video codecs, BD-rate is computed for each video and then averaged across all videos. More details on testset preprocessing, and traditional codecs configurations can be found in Appendix B.2.
⁵ We did not observe RD improvement when applying the flow loss to Conv-SSF training.
⁶ We use the original 7 UVG sequences that are commonly used in other works (Agustsson et al., 2020).
4.2 RESULTS
RD and BD-rate for image codecs The RD curves for all compared image codecs evaluated on Kodak are shown in Figure 3a, and the relative rate reduction of each codec compared to VTM-12.1 at a range of PSNR levels is shown in Figure 3b.⁷
As can be seen from Figure 3, SwinT transform consistently outperforms its convolutional counterpart; the RD-performance of SwinT-Hyperprior is on-par with Conv-ChARM, despite the simpler prior; SwinT-ChARM outperforms VTM-12.1 across a wide PSNR range. In the Appendix (Figure 28 and Figure 30), we further incorporate the results from existing literature known to us for a complete comparison. Particularly, our Conv-Hyperprior is much better than the results reported in (Minnen et al., 2018) (no context), and Conv-ChARM is on par with (Minnen & Singh, 2020).
In Table 1, we summarize the BD-rate of image codecs across all four datasets with VTM-12.1 as the anchor. On average, SwinT-ChARM is able to achieve a 3.8% rate reduction compared to VTM-12.1. The relative gain from Conv-Hyperprior to SwinT-Hyperprior is on average 12%, and that from Conv-ChARM to SwinT-ChARM is on average 9%. Further gain over VTM-12.1 can be obtained by test-time latent optimization (Campos et al., 2019) or full model instance adaptation (van Rozendaal et al., 2021), which are out of the scope of this work.
⁷ The relative rate-saving curves in Figure 3b are generated by first interpolating the discrete RD points (averaged across the test set) with a cubic spline, and then comparing the bitrates of different models at fixed PSNR.
⁸ RD plots for the other three datasets can be found in the Appendix (Figure 14, Figure 15, and Figure 16).
RD and BD-rate for video codecs For P-frame compression, we evaluated SwinT-SSF on UVG and MCL-JCV, with the RD comparison shown in Figure 4. Again, the SwinT transform leads to consistently better RD. Table 2 summarizes the BD-rate with our reproduced Conv-SSF model as the anchor. We can see that SwinT-SSF achieves an average of 11% rate saving over Conv-SSF. Additionally, we show that if the SwinT transform is only applied to the residual autoencoder (labeled as SwinT-SSF-Res), it only achieves about a 4.6% gain, which indicates that both flow and residual compression benefit from SwinT as encoder and decoder transforms. Note that SwinT-SSF still lags behind HM, suggesting lots of room for improvement in neural video compression. For a per-video breakdown of BD-rate, see Figure 18 and Figure 17 in the Appendix.
Decoding complexity We evaluate the decoding complexity of 4 image codecs on 100 images of size 768 × 512 and show the metrics in Table 3, including decoding time, GMACs, GPU peak memory during decoding, and total model parameters. The models run with PyTorch 1.9.0 on a workstation with one RTX 2080 Ti GPU. From the table, the inference time of the SwinT decoder is less than that of the Conv decoder. The entropy decoding time of the ChARM prior is about twice that of the factorized prior. The total decoding time of SwinT-based models is less than that of Conv-based models. In ablation study A5, we show that a smaller SwinT-Hyperprior with 20.6M parameters has almost the same RD as the SwinT-Hyperprior profiled here. For details on encoding complexity, the profiling setup, and scaling with image resolution, please refer to Table 4 and Section D.3 in the Appendix.
Scaling behavior To see how the BD-rate varies with model size, we scale SwinT-Hyperprior and Conv-Hyperprior to be twice or half the size of the base models (i.e. medium size)⁹. The result is shown in Figure 5. For both types of models, as we reduce the base model size, there is a sharp drop in performance, while doubling the model size only leads to marginal gain. Noticeably, SwinT-Hyperprior-small is on par with Conv-Hyperprior-medium even with half the parameters, and SwinT transforms in general incur fewer MACs per parameter.
In Figure 1, we further consolidate the decoding latency and scaling behavior studies into a single plot and show that SwinT-ChARM runs at a comparable speed to VTM-12.1 while achieving better performance,¹⁰ as opposed to state-of-the-art neural codecs with spatial autoregressive priors that decode orders of magnitude slower.
4.3 ANALYSIS
Latent correlation One of the motivating principles of transform coding is that simple coding can be made more effective in the transform domain than in the original signal space (Goyal, 2001; Ballé et al., 2021). A desirable transform would decorrelate the source signal so that simple scalar quantization and factorized entropy model can be applied without constraining coding performance. In most mature neural compression solutions, uniform scalar quantization is adopted together with a learned factorized or conditionally factorized Gaussian prior distribution. It is critical, then, to effectively factorize and Gaussianize the source distribution so that coding overhead can be minimized.
Specifically, in hyperprior-based models (Ballé et al., 2018), ȳ ≜ (y − µ)/σ is modeled as a standard spherical normal vector. The effectiveness of the analysis transform ga can then be evaluated by measuring how much correlation there is among different elements in ȳ. We are particularly interested in measuring the correlation between nearby spatial positions, which are heavily correlated in the source domain for natural images. In Figure 6, we visualize the normalized spatial correlation of ȳ averaged over all latent channels,¹¹ and compare Conv-Hyperprior with SwinT-Hyperprior at β = 0.001. It can be observed that while both lead to small cross-correlations, the Swin-Transformer does a much better job, with uniformly smaller correlation values; the observation is consistent with other β values, which are provided in Figure 20 in the Appendix. This suggests that transformer-based transforms incur less redundancy across different spatial latent locations compared with convolutional ones, leading to an overall better rate-distortion trade-off. The larger spatial correlation (and thus redundancy) in Conv-Hyperprior also explains why a compute-heavy spatial auto-regressive model is often needed to improve RD with convolution-based transforms (Minnen et al., 2018; Lee et al., 2019; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020). Figure 6 also reveals that most of the correlation of a latent comes from the four elements surrounding it. This suggests that a checkerboard-based conditional prior model (He et al., 2021) may yield further coding gain.
⁹ Detailed model configurations are provided in Appendix A.3.
¹⁰ Note that it is difficult to fairly compare the decoding time of VTM and neural codecs since they run on different hardware. For more discussion please refer to Appendix D.3.
¹¹ The value at index (i, j) corresponds to the normalized cross-correlation of latents at spatial locations (w, h) and (w + i, h + j), averaged across all latent elements of all images on Kodak.
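A minimal sketch of how such a correlation map can be computed from the standardized latent is given below; the circular shift via torch.roll (which wraps around image borders) is a simplifying assumption, not necessarily the exact protocol used for Figure 6.

import torch

def spatial_correlation(y_bar, max_shift=4):
    # Normalized cross-correlation between each latent position and its
    # (i, j)-shifted neighbor, averaged over channels and positions.
    # Assumes y_bar = (y - mu) / sigma is (approximately) zero-mean.
    corr = torch.zeros(2 * max_shift + 1, 2 * max_shift + 1)
    var = (y_bar ** 2).mean()
    for i in range(-max_shift, max_shift + 1):
        for j in range(-max_shift, max_shift + 1):
            shifted = torch.roll(y_bar, shifts=(i, j), dims=(2, 3))
            corr[i + max_shift, j + max_shift] = (y_bar * shifted).mean() / var
    return corr  # the center entry equals 1 by construction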
Effective receptive field Intra prediction in HEVC or AV1 relies only on the left and top borders of the current coding block (Sullivan et al., 2012; Chen et al., 2018), except for intra block copy for screen content (Xu et al., 2016). We would like to see how large the effective receptive field (ERF) (Luo et al., 2017) of SwinT encoders is compared to Conv encoders. The theoretical receptive field of the encoders (ga, ha ◦ ga) in SwinT-based codecs is much larger than that of Conv-based codecs. However, comparing Figure 7a with 7e and Figure 7b with 7f, the ERF of SwinT encoders after training is even smaller than that of Conv encoders. When we examine the ERF of the released Swin transformers for classification, detection, and segmentation tasks, they all span the whole input image. This contrast suggests that (natural) image compression with a rate-distortion objective is a local task, even with transformer-based nonlinear transforms. We further look into P-frame compression models, particularly the ERF of the two types of transforms in the flow codec and the residual codec, as shown in Figure 7d & 7h, and Figure 7c & 7g. Clearly, for the flow codec, the SwinT transform has a much larger ERF than its convolutional counterpart. For the residual codec, the ERF of SwinT transforms is similar to the image (I-frame) compression case. This shows the flexibility of SwinT encoders to attend to longer or shorter ranges depending on the task. To get a better picture of the behavior of attention layers in SwinT transforms, we also show the attention distance in each layer in Figure 22.
Progressive decoding The ERF in the previous section shows the behavior of the encoder transforms; here we further investigate the decoder transforms through the lens of progressive decoding (Rippel et al., 2014; Minnen & Singh, 2020; Lu et al., 2021). Initialized with the prior mean, the input to the decoder is progressively updated with the dequantized latent ŷ in terms of coding units, leading to gradually improved reconstruction quality. For a latent of shape (C,H,W ), we consider three types of coding units, i.e. per channel (1, H,W ), per pixel (C, 1, 1), and per element (1, 1, 1). The coding units are ordered by the sum of the prior std of all elements within each unit. The RD curves of progressive decoding for SwinT-Hyperprior and Conv-Hyperprior are shown in Figure 8a; they closely follow each other when ordered by channel or element, but are significantly apart when ordered by pixel (spatial dimension). In particular, we show an extreme case in which half of the pixels in the latent (masked by a checkerboard pattern) are updated with dequantized values, corresponding to the two scatter points in Figure 8a. One visual example (CLIC2021 test) is shown in Figure 8b under this setup, where we can clearly see the SwinT decoder achieves better reconstruction quality than the Conv decoder, mainly in terms of a more localized response to a single latent pixel. This is potentially useful for region-of-interest decompression. More visual examples are shown in Figure 26.
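The unit ordering described above can be sketched as follows; the descending direction of the ranking (largest total std first) is an assumption on our part.

import torch

def progressive_order(sigma, unit="channel"):
    # Order coding units of a (C, H, W) latent by the sum of the prior std
    # over each unit (sketch).
    if unit == "channel":          # units of shape (1, H, W)
        scores = sigma.sum(dim=(1, 2))
    elif unit == "pixel":          # units of shape (C, 1, 1)
        scores = sigma.sum(dim=0).flatten()
    else:                          # per element (1, 1, 1)
        scores = sigma.flatten()
    return torch.argsort(scores, descending=True)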
4.4 ABLATION STUDY
Relative position bias There are two sources of positional information in SwinT transforms, namely the Space-to-Depth modules and the additive relative position bias (RPB). Even when the RPB is removed, SwinT-Hyperprior still outperforms Conv-Hyperprior across all bitrates, which indicates that image compression may not require accurate relative position information.
Shifted window The original motivation of the shifted window design is to promote inter-layer feature propagation across non-overlapping windows. Image compression performance drops only slightly when there is no shifted window at all. This further suggests that image compression mainly relies on local information.
The details of ablations A3-A5 in Figure 9 can be found in Section F of the appendix.
5 CONCLUSION
In this work we propose Swin-transformer-based transforms for image and video compression. In the image compression setting, the SwinT transform consistently outperforms its convolutional counterpart. In particular, the proposed SwinT-ChARM model outperforms VTM-12.1 at comparable decoding speed, which, to the best of our knowledge, is a first among learning-based methods. We also show the effectiveness of SwinT transforms when extended to the P-frame compression setting. Compared with convolutional transforms, SwinT transforms can spatially decorrelate the latent better, have a more flexible receptive field that adapts to tasks requiring either short-range (image) or long-range (motion) information, and allow better progressive decoding of latent pixels. While pushing neural image compression to a new level in terms of the rate-distortion-computation trade-off, we believe this is only the starting point for developing more efficient transformer-based image and video codecs.
ACKNOWLEDGMENTS
We would like to thank Amir Said for developing entropy coding and great advice on data compression in general. We would also appreciate the helpful discussions from Reza Pourreza and Hoang Le, and draft reviews from Auke Wiggers and Johann Brehmer.
Appendix
Table of Contents
A Models
A.1 Convolution baselines
A.2 Swin-Transformer based compression models
A.3 Model configurations for model size scaling study
B Training and Evaluation
B.1 Training
B.2 Traditional codec evaluation
C BD rate computation
C.1 BD rate for image codec
C.2 BD rate for video codec
D More Results
D.1 Image compression
D.2 Video Compression
D.3 Coding complexity
E Analysis
E.1 Spatial correlation of latent
E.2 Effective Receptive Field
E.3 Rate distribution across latent channels
E.4 Centered kernel alignment
E.5 Progressive decoding
F More ablation studies
A MODELS
A.1 CONVOLUTION BASELINES
Conv-Hyperprior and Conv-ChARM The architectures of Conv-Hyperprior and Conv-ChARM are shown in Figure 10 and Figure 11. For both architectures, our base model (i.e. medium size) has the following hyperparameters: (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192).
A.2 SWIN-TRANSFORMER BASED COMPRESSION MODELS
SwinT-Hyperprior, SwinT-ChARM For both SwinT-Hyperprior and SwinT-ChARM, we use the same configurations: (wg, wh) = (8, 4), (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192), (d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1) where C, d, and w are defined in Figure 13 and Figure 2. The head dim is 32 for all attention layers in SwinT-based models.
SwinT-SSF For the SwinT transforms used in the SSF variant, the first Patch Merge block has a downsampling rate of 4 and the two other Patch Merge blocks have a downsampling rate of 2. Thus the overall downsampling rate of the encoder is still 16, the same as for the image compression models. There are only 3 transformer stages with depths 2, 4, 2. The embedding dim is 96. The numbers of latent and hyper-latent channels are all 192. The window size is 4 for the flow codec and 8 for the residual codec.
SwinT-SSF-Res This is a variant where only the residual autoencoder uses SwinT transforms; it has the same architecture as the residual autoencoder in SwinT-SSF.
A.3 MODEL CONFIGURATIONS FOR MODEL SIZE SCALING STUDY
A.3.1 SWINT-HYPERPRIOR
Set of model hyperparameters that are common to all experiments: (d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1) (wg, wh) = (8, 4)
SwinT-Hyperprior (small) (C1, C2, C3, C4, C5, C6) = (96, 128, 160, 192, 96, 128)
SwinT-Hyperprior (medium) (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192)
SwinT-Hyperprior (large) (C1, C2, C3, C4, C5, C6) = (160, 256, 352, 448, 192, 256)
A.3.2 CONV-HYPERPRIOR
Conv-Hyperprior (small) (C1, C2, C3, C4, C5, C6, C7) = (192, 192, 192, 192, 128, 128, 128)
Conv-Hyperprior (medium) (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192)
Conv-Hyperprior (large) (C1, C2, C3, C4, C5, C6, C7) = (448, 448, 448, 448, 256, 256, 256)
B TRAINING AND EVALUATION
B.1 TRAINING
All image compression models are trained on the CLIC2020 training set, which contains both professional and mobile training sets, in total 1,633 high-resolution natural images. Conv-Hyperprior and SwinT-Hyperprior are trained for 2M batches. Each batch contains 8 patches of size 256 × 256 randomly cropped from the training images. The learning rate starts at 10−4 and is reduced to 10−5 at 1.8M steps.
For Conv-ChARM, we first train a Conv-ChARM model at β = 0.0001 from scratch for 2M steps; with it as the starting point, we continue to train models for the other β values, Conv-ChARM-β, β ∈ B, for 1.5M steps. For SwinT-ChARM-β, we load the transform weights from the checkpoint at 2M steps of the pretrained SwinT-Hyperprior-β, then finetune the transforms together with the randomly initialized ChARM prior for 1.1M steps. The learning rate starts at 10−4 and is reduced to 10−5 for the last 100K steps.
The training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering the rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rate and distortion, for each solution we train 5 models with β ∈ B = {0.003, 0.001, 0.0003, 0.0001, 0.00003}. Usually we need to train longer for the models with larger bitrates (i.e. smaller β) to converge. In particular, for the results presented in this paper, we train SwinT-Hyperprior-0.00003 for 2.5M steps, instead of the 2M steps used for the other 4 lower bitrates.
For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinT-SSF are trained on the Vimeo-90k dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size of 8, and crop size of 256 × 256, followed by 50K steps of training with learning rate 10−5 and crop size¹² of 384×256. The models are trained with 8 β values 2γ×10−4 : γ ∈ {0, 1, ..., 7}. We adopt one critical trick to stabilize the training from (Jaegle et al., 2021; Meister et al., 2018), i.e. forwarding each video sequence twice during one optimization step (mini-batch), once in the original frame order and once in the reversed frame order. When this trick is used, we set the batch size to 4 instead of 8. Finally, we add the flow loss only between 0 and 200K steps, which we found not critical for stable training but helps improve the RD.
For all model training, the Adam optimizer is used without weight decay. Training for 2M steps takes about 10 days and 14 days respectively for Conv-Hyperprior and SwinT-Hyperprior on a single Nvidia V100 GPU. For the P-frame models, the total training time is about 7.5 days on a single Nvidia V100 GPU.
For all models, we use mixed quantization during training (Minnen & Singh, 2020), i.e. adding uniform noise to the continuous latent before passing it to the prior model, and subtracting the prior mean from the continuous latent followed by rounding before passing it to the decoder transform.
¹² We did not use a crop size of 384 × 384 during the second stage as in the original paper because the resolution of the Vimeo dataset is 448 × 256. We found that in our case increasing the crop size from 256 × 256 to 384 × 256 in the second stage does not improve RD.
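A short sketch of the mixed quantization described above is given below; the straight-through gradient pass-through on the rounded path is one common choice and an assumption here, not a detail stated in the text.

import torch

def mixed_quantize(y, mu):
    # Noisy latent for the rate term; rounded latent for the decoder transform.
    y_noisy = y + torch.empty_like(y).uniform_(-0.5, 0.5)  # for the prior model
    y_hat = torch.round(y - mu) + mu                       # for the decoder g_s
    y_hat = y + (y_hat - y).detach()                       # straight-through gradients (assumed)
    return y_noisy, y_hat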
B.2 TRADITIONAL CODEC EVALUATION
In this section, we provide evaluation script used to generate results for traditional codecs.
B.2.1 IMAGE CODECS
VTM-12.1: VTM-12.1 software is built from https://vcgit.hhi.fraunhofer.de/ jvet/VVCSoftware_VTM/-/tags/VTM-12.1 and we use the script from CompressAI (https://github.com/InterDigitalInc/CompressAI/tree/efc69ea24) for dataset evaluation. Specifically, the following command is issued to gather VTM-12.1 image compression evaluation results:
python -m compressai.utils.bench vtm [path to image folder] -c [path to VVCSoftware_VTM folder]/cfg/encoder_intra_vtm.cfg -b [path to VVCSoftware_VTM folder]/bin -q 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40
BPG: BPG software is obtained from https://bellard.org/bpg/ and the following commands are used for encoding and decoding.
bpgenc -e x265 -q [0 to 51] -f 444 -o [encoded heic file] [original png file]
bpgdec -o [decoded png file] [encoded heic file]
B.2.2 VIDEO CODECS
HEVC (x265)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] -i [input yuv420 raw video] -c:v libx265 -preset medium -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency -x265-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
AVC (x264)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] -i [input yuv420 raw video] -c:v libx264 -preset medium -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency -x264-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
HEVC (HM)
[Path to HM folder]/bin/TAppEncoderStatic -c [Path to HM folder]/cfg/encoder_lowdelay_P_main.cfg -i [input yuv raw video] --InputBitDepth=8 -wdt [width] -hgt [height] -fr [frame-rate] -f [number of frames] -o [output yuv video] -b [encoded bitstream bin file] -ip 12 -q [12, 17, 22, 27, 32, 37, 42]
C BD RATE COMPUTATION
import numpy as np
import scipy.interpolate

def Bjontegaard_Delta_Rate(
    rate_ref, psnr_ref,  # reference codec (rate and PSNR in ascending order)
    rate_new, psnr_new,  # new codec
):
    # Integrate log-rate over the overlapping PSNR range, clipped to [30, 44] dB.
    min_psnr = max(psnr_ref[0], psnr_new[0], 30)
    max_psnr = min(psnr_ref[-1], psnr_new[-1], 44)
    log_rate_ref = np.log(rate_ref)
    log_rate_new = np.log(rate_new)
    spline_ref = scipy.interpolate.CubicSpline(
        psnr_ref, log_rate_ref, bc_type='not-a-knot', extrapolate=True)
    spline_new = scipy.interpolate.CubicSpline(
        psnr_new, log_rate_new, bc_type='not-a-knot', extrapolate=True)
    delta_log_rate = (spline_new.integrate(min_psnr, max_psnr)
                      - spline_ref.integrate(min_psnr, max_psnr))
    # Average rate ratio over the PSNR range, expressed in percent.
    delta_rate = np.exp(delta_log_rate / (max_psnr - min_psnr))
    return 100 * (delta_rate - 1)
C.1 BD RATE FOR IMAGE CODEC
# Evaluate BD-rate on an image dataset
bd_rates = list()
for image in image_dataset:
    # rate/psnr of the reference and new codecs on this image at several qualities
    rate_ref, psnr_ref = ReferenceCodec(image, qp=[...])
    rate_new, psnr_new = NewImageCodec(image, beta=[...])
    bd_rates.append(Bjontegaard_Delta_Rate(rate_ref, psnr_ref, rate_new, psnr_new))
# BD-rate is computed per image and then averaged
bd_rate = np.mean(bd_rates)
C.2 BD RATE FOR VIDEO CODEC
# Evaluate BD-rate on a video dataset
bd_rates = list()
for video in video_dataset:
    # rate/psnr of the reference and new codecs on this video at several qualities
    rate_ref, psnr_ref = ReferenceCodec(video, qp=[...])
    rate_new, psnr_new = NewVideoCodec(video, beta=[...])
    bd_rates.append(Bjontegaard_Delta_Rate(rate_ref, psnr_ref, rate_new, psnr_new))
# BD-rate is computed per video and then averaged
bd_rate = np.mean(bd_rates)
D MORE RESULTS
D.1 IMAGE COMPRESSION
Additional rate-distortion results on CLIC2021, Tecnick, and JPEG-AI are provided in Figure 14, Figure 15, and Figure 16.
For a complete comparison with results from the existing literature, we provide a summary RD plot on Kodak of all neural image codec solutions known to us in Figure 28. In Figure 29 and Figure 30, we plot the percentage rate saving with BPG444 and VTM-12.1 as the reference, respectively.
D.2 VIDEO COMPRESSION
In Figure 17 and Figure 18, we provide performance comparison of Conv-SSF, SwinT-SSF, HEVC (x265), and AVC (x264) with per-video breakdown.
D.3 CODING COMPLEXITY
We evaluate the decoding complexity of all neural image codecs in terms of the time for network inference and entropy coding, peak GPU memory, model size, etc. We select 100 high-resolution images from the CLIC test set and center-crop them to three resolutions (768×512, 1280×768, 1792×1024) to see how those metrics scale with image size. Batch size is one for all model inference. We run the experiment on a local workstation with one RTX 2080 Ti GPU, with PyTorch 1.9.0 and CUDA toolkit 11.1. For bit-exact entropy coding, we need to use deterministic¹³ convolution. The neural networks run on the single GPU and entropy coding runs on the CPU with 8 threads. We follow the standard protocols to measure the inference time and peak memory of neural nets, such as GPU warm-up and synchronization. File open/close is excluded from the coding time measurement. MACs, GPU peak memory, and model parameter count are profiled using the get_model_profile function of the deepspeed profiler¹⁴.
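The timing protocol mentioned above (warm-up followed by synchronized timing) can be sketched as follows; the iteration counts are illustrative.

import time
import torch

def time_gpu(fn, x, warmup=10, iters=50):
    # Average GPU latency of fn(x): warm-up runs, then synchronized timing.
    for _ in range(warmup):
        fn(x)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn(x)
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters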
We show more details on the coding complexity of neural image codecs in Figure 19, particularly the linear scaling to image resolution of both SwinT-based and Conv-based models. The break-down of encoding complexity is shown in Table 4.
For completeness, we also report the profiling for CPU coding time in Table 5 and Table 6. The evaluation setup is the same as the GPU profiling case, except models are run on the CPU instead (same host machine with Intel(R) Xeon(R) W-2123 CPU @ 3.60GHz).
Table 7 reports the encoding and decoding time of VTM-12.1 under different quantization parameters (QPs), evaluated on an Intel Core i9-9940 CPU @ 3.30GHz and averaged over 24 Kodak images. As can be seen from the table, the decoding time of VTM-12.1 is a function of reconstruction quality, with longer decoding time observed for higher-quality reconstruction. In Figure 1, the reported VTM-12.1 decoding speed corresponds to a QP value of 28, where the bpp value is similar to that obtained by models trained with β = 0.001. It is worth pointing out that the VTM-12.1 encoding process is much slower, ranging anywhere from 1 to 5 minutes per image, whereas the neural codecs run much faster.
¹³ https://pytorch.org/docs/1.9.0/generated/torch.use_deterministic_algorithms.html?highlight=deterministic#torch.use_deterministic_algorithms
¹⁴ https://www.deepspeed.ai/tutorials/flops-profiler/#usage-outside-the-deepspeed-runtime
E ANALYSIS
E.1 SPATIAL CORRELATION OF LATENT
We visualize the spatial correlation map for Conv-Hyperprior and SwinT-Hyperprior at different β in Figure 20.
E.2 EFFECTIVE RECEPTIVE FIELD
See Figure 21 for the effective receptive field of the composed encoding transforms ha ◦ ga, and Figure 22 for the mean attention distance visualization of each head within each transformer layer.
E.3 RATE DISTRIBUTION ACROSS LATENT CHANNELS
It is generally believed that ConvNets learn to extract various features and store them in each channel of the activations. Here we look into the features in the latent channels, which are to be quantized and entropy coded into bitstreams. In particular, we order the total bitrate of each channel averaged over the Kodak dataset (24 images of size 768 × 512). The result is shown in Figure 23. We find an interesting phenomenon across models trained for different bitrates: there is a cutoff point of the bitrate-vs-channel curve where the bitrate suddenly drops to zero, which manifests the rate constraint in the loss function. As expected, the cutoff index decreases for models trained for smaller bitrates (larger β).
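The per-channel bitrate can be estimated directly from the Gaussian prior, as in the following sketch; variable names are illustrative, and a per-image average would divide by the number of images.

import torch

def per_channel_bits(y_hat, mu, sigma):
    # Estimated bits per latent channel under the Gaussian prior, using the
    # interval probability of the rounded latent.
    normal = torch.distributions.Normal(mu, sigma)
    p = normal.cdf(y_hat + 0.5) - normal.cdf(y_hat - 0.5)
    bits = -torch.log2(p.clamp_min(1e-9))
    return bits.sum(dim=(0, 2, 3))  # total bits per channel over the batch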
E.4 CENTERED KERNEL ALIGNMENT
To investigate the difference or similarity between the latent features of Conv-based and SwinT-based models, we resort to a commonly used tool in representation learning called centered kernel alignment (CKA) (Kornblith et al., 2019). We evaluate the CKA between each Conv latent channel and each SwinT latent channel (both models trained under the same β) over the 24 Kodak images. There are 320 channels for both the Conv latent and the SwinT latent, resulting in a 320 × 320 CKA matrix. The result is shown in Figure 24. The latent channels are ordered by the averaged bitrate of each channel over the Kodak images (same as in Section E.3). The CKA matrix has a clear block structure, where the high-similarity region corresponds to the latent channels before the bitrate cutoff in the rate distribution curve (Figure 23).
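For reference, a minimal sketch of linear CKA between two feature matrices (here, a latent channel flattened across images and spatial positions) is given below; this is the standard formula from Kornblith et al. (2019), while the flattening convention is our assumption.

import torch

def linear_cka(X, Y):
    # Linear CKA between feature matrices of shape (n_examples, n_features).
    X = X - X.mean(dim=0, keepdim=True)   # center the features
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (Y.T @ X).norm() ** 2          # ||Y^T X||_F^2
    return (hsic / ((X.T @ X).norm() * (Y.T @ Y).norm())).item()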
Identification of SwinT and Conv latent channels with CKA Within the block of high similarity (from the CKA matrix), we identify the 'less' similar SwinT latent channels as those with the lowest CKA values between that SwinT channel and all the Conv latent channels. For each of the identified SwinT channels, we find the Conv latent channel with the largest CKA value between the two. This way, we are able to identify latent channels of two different models with high similarity. We show the top 8 identified channels in Figure 25. The channels are indeed highly similar, up to a sign flip, even though the two model architectures are quite different. This empirical result is relevant to the literature on the identifiability of generative models (Khemakhem et al., 2020).
E.5 PROGRESSIVE DECODING
More visual examples of reconstructions of checkerboard (spatially) masked latent are provided in Figure 26.
Channel-wise progressive decoding Here we visualize the behavior of channel-wise progressive decoding of the Conv and SwinT models. For both models, we order the latent channels with a heuristic importance metric: the reduction of distortion over the increase of rate if one channel is included for decoding. Starting from the per-channel bitrate order, we pass the leading channels (zeroing out all remaining channels) to the decoder to obtain a reconstruction and calculate the distortion. We plot the top 8 channels following this importance order. For each channel, we show 6 maps from top to bottom: the latent values, the mean prediction from the hyper-decoder, the standard deviation prediction from the hyper-decoder, the bitmap, the reconstruction with only the current channel, and the reconstruction with all channels up to the current one. The result is shown in Figure 27. For the Conv models, the top 3 most important channels are usually responsible for lightness and the two color components (blue-yellow, red-green), similar to the LAB colorspace. The rest of the latent channels are responsible for adding details like texture and edges. For the SwinT models, at low bitrate there is one significantly different channel (the first column in Figure 27b), which is at a coarse scale (with smooth blocks) and is responsible for reconstructing a nearly constant image with value close to 120 (the mean pixel value of natural image datasets). This latent channel costs an extremely small bitrate but reaches a PSNR of 13dB. When we remove this first channel, progressive reconstruction with the remaining 7 leading channels only reaches a PSNR of around 16dB, instead of the 26dB shown in the figure.
F MORE ABLATION STUDIES
Local self-attention To see whether local self-attention is the most important component in transformers, we replace it with a depthwise separable convolution block¹⁵ (Han et al., 2021; El-Nouby et al., 2021), which performs similar spatial feature aggregation to self-attention, while keeping all other components the same as in the SwinT. We found this change only leads to a minor degradation in RD. This suggests that other components in transformers, such as MLPs and skip connections, may also play a big role, beyond just self-attention, in the leading performance in our work and many other tasks (Dong et al., 2021).
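A minimal sketch of this replacement block (following the layer sequence in footnote 15) is shown below; GroupNorm(1, dim) is used as a channels-first stand-in for LayerNorm, which is an approximation on our part.

import torch.nn as nn

def ds_conv_block(dim):
    # Conv1x1 -> Norm -> ReLU -> depthwise Conv3x3 -> Norm -> ReLU -> Conv1x1 -> Norm -> ReLU
    return nn.Sequential(
        nn.Conv2d(dim, dim, 1), nn.GroupNorm(1, dim), nn.ReLU(),
        nn.Conv2d(dim, dim, 3, padding=1, groups=dim),  # depthwise 3x3
        nn.GroupNorm(1, dim), nn.ReLU(),
        nn.Conv2d(dim, dim, 1), nn.GroupNorm(1, dim), nn.ReLU(),
    )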
Small depths Upon investigating the mean attention distance shown in Figure 22, we find that the last block in each of the last two encoder stages has about half of its attention heads degenerate to attending to fixed nearby pixels. This suggests redundant transformer blocks at those stages, so we remove those two blocks, i.e. going from depths [2, 2, 6, 2] to [2, 2, 5, 1]. The resulting SwinT-Hyperprior has even fewer parameters (20.6M) than Conv-Hyperprior (21.4M) with almost no RD loss compared to the larger model. We expect that further hyperparameter search will identify models with a better RD-complexity trade-off than we currently show in this work.
Deeper Conv encoder Deeper models are usually more expressive (Raghu et al., 2017), and state-of-the-art Conv-based compression models typically use much deeper layers than the encoder in the original Hyperprior model (Ballé et al., 2018). As a sanity check on whether deeper convolutional transforms can outperform SwinT-based encoder transforms with 12 blocks, we take an existing design (Chen et al., 2021) with residual blocks and attention (sigmoid gating) layers, which has over 50 conv layers in each of the encoder and decoder and more parameters than the Conv baseline. It indeed improves the RD at lower bitrates, but is still worse than SwinT-Hyperprior, and gets much worse at higher bitrates. This is probably the reason that compression models based on this type of transform did not report results at higher bitrates.
¹⁵ Conv1×1-LayerNorm-ReLU-DSConv3×3-LayerNorm-ReLU-Conv1×1-LayerNorm-ReLU | 1. What is the focus and contribution of the paper on neural image codecs?
2. What are the strengths of the proposed approach, particularly in terms of compression efficiency and complexity?
3. How does the reviewer assess the effectiveness of the Swin-transformer in video compression?
4. What are the advantages of using Swin-transformer in neural image codecs, such as inference time and receptive field?
5. Are there any limitations or areas for improvement regarding the proposed method? | Summary Of The Paper
Review | Summary Of The Paper
The authors extend the Swin-Transformer to neural image codecs and demonstrate that their proposed Swin-transformer-based model, SwinT-ChARM, achieves better compression efficiency than convolutional neural networks (ConvNets) while requiring fewer parameters and shorter decoding time.
Review
Nice exploration of extending the Swin-Transformer to a decoder setting and building Swin-transformer-based neural image codecs. The experiments show that Swin achieves better rate-distortion performance with lower complexity than existing solutions.
The authors further demonstrate the effectiveness of the Swin-transformer in video compression by enhancing scale-space-flow, a popular neural P-frame codec.
In the detailed experiments, they show that the inference time of the SwinT decoder is less than that of the Conv decoder; moreover, Swin has a flexible effective receptive field, incurs less redundancy across different spatial latent locations, and supports better progressive decoding. |
ICLR | Title
Transformer-based Transform Coding
Abstract
Neural data compression based on nonlinear transform coding has made great progress over the last few years, mainly due to improvements in prior models, quantization methods and nonlinear transforms. A general trend in many recent works pushing the limit of rate-distortion performance is to use ever more expensive prior models that can lead to prohibitively slow decoding. Instead, we focus on more expressive transforms that result in a better rate-distortion-computation trade-off. Specifically, we show that nonlinear transforms built on Swin-transformers can achieve better compression efficiency than transforms built on convolutional neural networks (ConvNets), while requiring fewer parameters and shorter decoding time. Paired with a compute-efficient Channel-wise AutoRegressive Model prior, our SwinT-ChARM model outperforms VTM-12.1 by 3.68% in BD-rate on Kodak with comparable decoding speed. In the P-frame video compression setting, we are able to outperform the popular ConvNet-based scale-space-flow model by 12.35% in BD-rate on UVG. We provide model scaling studies to verify the computational efficiency of the proposed solutions and conduct several analyses to reveal the source of coding gain of transformers over ConvNets, including better spatial decorrelation, flexible effective receptive field, and more localized response of latent pixels during progressive decoding.
1 INTRODUCTION
Transform coding (Goyal, 2001) is the dominant paradigm for compression of multi-media signals, and serves as the technical foundation for many successful coding standards such as JPEG, AAC, and HEVC/VVC. Codecs based on transform coding divide the task of lossy compression into three modularized components: transform, quantization, and entropy coding. All three components can be enhanced by deep neural networks: autoencoder networks are adopted as flexible nonlinear transforms, deep generative models are used as powerful learnable entropy models, and various differentiable quantization schemes are proposed to aid end-to-end training. Thanks to these advancements, we have seen rapid progress in the domain of image and video compression. Particularly, the hyperprior line of work (Ballé et al., 2018; Minnen et al., 2018; Lee et al., 2019; Agustsson et al., 2020; Minnen & Singh, 2020) has led to steady progress of neural compression performance over the past two years, reaching or even surpassing state-of-the-art traditional codecs. For example, in image compression, BPG444 was surpassed by a neural codec in 2018 (Minnen et al., 2018), and (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) have claimed on-par or better performance than VTM (a test model of the state-of-the-art non-learned VVC standard).
One general trend in the advancement of neural image compression schemes is to develop ever more expressive yet expensive prior models based on spatial context. However, the rate-distortion improvement from context-based prior modeling often comes with a hefty price tag¹ in terms of decoding complexity. Notably, all existing works that claimed on-par or better performance than VTM (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) rely on slow and expensive spatial-context-based prior models.
∗Equal contribution. †Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.
¹ In the extreme case when a latent-pixel-level spatial autoregressive prior is used, decoding a single 512x768 image requires no less than 1536 interleaved executions of prior model inference and entropy decoding (assuming the latent is downsampled by a factor of 16x16).
The development of nonlinear transforms, on the other hand, is largely overlooked. This leads us to the following questions: can we achieve the same performance as that of expensive prior models by designing a more expressive transform together with simple prior models? And if so, how much more complexity in the transform is required?
Interestingly, we show that by leveraging and adapting the recent development of vision transformers, not only can we build neural codecs with simple prior models that outperform ones built on expensive spatial autoregressive priors, but we can do so with smaller transform complexity than their convolutional counterparts, attaining a strictly better rate-distortion-complexity trade-off. As can be seen in Figure 1, our proposed neural image codec SwinT-ChARM can outperform VTM-12.1 at comparable decoding time, which, to the best of our knowledge, is a first in the neural compression literature.
As main contributions, we 1) extend the Swin-Transformer (Liu et al., 2021) to a decoder setting and build Swin-transformer based neural image codecs that attain better rate-distortion performance with lower complexity compared with existing solutions, 2) verify its effectiveness in video compression by enhancing scale-space-flow, a popular neural P-frame codec, and 3) conduct extensive analysis and ablation studies to explore differences between convolutions and transformers, and investigate potential sources of coding gain.
2 BACKGROUND & RELATED WORK
Conv-Hyperprior The seminal hyperprior architecture (Ballé et al., 2018; Minnen et al., 2018) is a two-level hierarchical variational autoencoder, consisting of a pair of encoder/decoder ga, gs, and a pair of hyper-encoder/hyper-decoder ha, hs. Given an input image x, a pair of latent y = ga(x) and hyper-latent z = ha(y) is computed. The quantized hyper-latent ẑ = Q(z) is modeled and entropy-coded with a learned factorized prior. The latent y is modeled with a factorized Gaussian distribution p(y|ẑ) = N (µ, diag(σ)) whose parameters are given by the hyper-decoder (µ,σ) = hs(ẑ). The quantized version of the latent ŷ = Q(y−µ)+µ is then entropy coded and passed through the decoder gs to derive the reconstructed image x̂ = gs(ŷ). The transforms ga, gs, ha, hs are all parameterized as ConvNets (for details, see Appendix A.1).
Conv-ChARM (Minnen & Singh, 2020) extends the baseline hyperprior architecture with a channel-wise auto-regressive model (ChARM)², in which the latent y is split along the channel dimension into S groups (denoted as y1, . . . , yS), and the Gaussian prior p(ys|ẑ, ŷ<s) is made autoregressive across groups, where the mean/scale of ys depends on the quantized latent in the previous groups ŷ<s. In practice, S = 10 provides a good balance of performance and complexity and is adopted here.
Spatial AR models Most recent performance advancements in neural image compression are driven by the use of spatial auto-regressive/context models. Variants include causal global prediction (Guo et al., 2021), 3D context (Ma et al., 2021), block-level context (Wu et al., 2020), and nonlocal context (Li et al., 2020; Qian et al., 2021). One common issue with these designs is that decoding cannot be parallelized along spatial dimensions, leading to impractical³ decoding latency, especially for large-resolution images.
² For details refer to Figures 11 and 12 in the Appendix.
³ It is reported in (Wu et al., 2020) (Table I) that the decoding time of spatial autoregressive models on a 512x768 image ranges from 2.6s to more than half a minute, depending on the specific design. Also see Figure 1.
ConvNet-based transforms While the design space of prior models is extensively explored, nonlinear transforms, as an important component, have received less attention. A standard convolution encoder-decoder with GDN (Ballé et al., 2016; 2017) as activation is widely adopted in the literature. Later works introduce new transform designs, such as residual blocks with smaller kernels (Cheng et al., 2020), nonlocal (sigmoid gating) layers (Zhou et al., 2019; Chen et al., 2021), invertible neural networks (Xie et al., 2021), and PReLU as an efficient replacement of GDN (Egilmez et al., 2021).
Vision transformers Although many transform networks are proposed, they are still mainly based on ConvNets. Recently transformers (Vaswani et al., 2017) have been introduced to the vision domain and have shown performance competitive with ConvNets in many tasks, e.g. object detection (Carion et al., 2020), classification (Dosovitskiy et al., 2021), image enhancement (Chen et al., 2020), and semantic segmentation (Zheng et al., 2021). Inspired by their success, in this work we explore how vision transformers work as nonlinear transforms for image and video compression.
3 SWIN-TRANSFORMER BASED TRANSFORM CODING
Among the large number of vision transformer variants, we choose Swin Transformer (Liu et al., 2021) (hereafter referred to as SwinT) to build the nonlinear transforms, mainly because of 1) its linear complexity w.r.t. input resolution due to local window attention, and 2) its flexibility in handling varying input resolutions at test time, enabled by relative position bias and hierarchical architecture.
3.1 SWINT ENCODER AND DECODER
The original SwinT is proposed as a vision backbone, i.e. an encoder transform with downsampling. As shown in Figure 2, the SwinT encoder ga contains SwinT blocks interleaved with Patch Merge blocks. The Patch Merge block contains Space-to-Depth (for downsampling), LayerNorm, and Linear layers sequentially. A SwinT block performs local self-attention within each non-overlapping window of the feature maps and preserves the feature size. Consecutive SwinT blocks at the same feature size shift the window partitioning with respect to the previous block, promoting information propagation across neighboring windows.
We adopt SwinT encoder as the encoder transform ga in our model, and extend it to SwinT decoder gs by reversing the order of blocks in ga, and replacing the Patch Merge block with a Patch Split block, which contains Linear, LayerNorm, Depth-to-Space (for upsampling) layers in sequence. The architectures for hyper transforms ha, hs are similar to ga, gs with different configurations.
⁴ The ChARM architecture (Minnen & Singh, 2020) is detailed in Figure 12 of Appendix A.1.
With these four SwinT transforms, we propose two image compression models, SwinT-Hyperprior and SwinT-ChARM⁴, whose prior and hyperprior models are respectively the same as in Conv-Hyperprior and Conv-ChARM introduced in Section 2. The full model architectures are shown in Figure 2 and Figure 13.
3.2 EXTENSION TO P-FRAME COMPRESSION
To investigate the effectiveness of SwinT transforms for video compression, we study one popular P-frame compression model called Scale-Space Flow (SSF) (Agustsson et al., 2020). There are three instances of Conv-Hyperprior in SSF, which are respectively for compressing I-frames, scale-space flow, and residual. We propose a SwinT variant, referred to as SwinT-SSF, which is obtained by replacing the Conv transforms ga, gs in the flow codec and residual codec of SSF with SwinT transforms. To stabilize training of the flow codec in SwinT-SSF, we need to remove all LayerNorm layers and reduce the window size (e.g. from 8 to 4). The baseline SSF model will be referred to as Conv-SSF. Even though we build our solution on top of SSF, we believe this general extension can be applied to other ConvNet-based video compression models (Rippel et al., 2021; Hu et al., 2021) as well.
4 EXPERIMENTS AND ANALYSIS
4.1 EXPERIMENT SETUP
Training All image compression models are trained on the CLIC2020 training set. Conv-Hyperprior and SwinT-Hyperprior are trained for 2M batches. Conv-ChARM and SwinT-ChARM are trained for 3.5M and 3.1M steps, respectively. Each batch contains 8 random 256 × 256 crops from training images. The training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering the rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rate and distortion, for each solution we train 5 models with β ∈ {0.003, 0.001, 0.0003, 0.0001, 0.00003}. The detailed training schedule is in Appendix B.1. For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinT-SSF are trained on the Vimeo-90k dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size of 8, and crop size of 256 × 256, followed by 50K steps of training with learning rate 10−5 and crop size 384 × 256. The models are trained with 8 β values {2^γ × 10^−4 : γ ∈ {0, 1, ..., 7}}. We adopt one critical trick from (Jaegle et al., 2021; Meister et al., 2018) to stabilize training, i.e. forwarding each video sequence twice during one optimization step (mini-batch): once in the original frame order, and once in the reversed frame order. Finally, we add the flow loss5 only between 0 and 200K steps, which we found is not critical for stable training but improves the RD.
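For reference, the training objective can be sketched as follows. The exact scaling of the rate term and the pixel range of the MSE vary across implementations, so the snippet below is only an illustrative assumption, with the rate estimated from the prior likelihoods of the latent and hyper-latent.

import torch

def rd_loss(x, x_hat, likelihoods, beta):
    # L = D + beta * R, with D the MSE in RGB and R estimated in bits per pixel
    n, _, h, w = x.shape
    bpp = sum((-torch.log2(l)).sum() for l in likelihoods) / (n * h * w)
    mse = torch.mean((x - x_hat) ** 2)
    return mse + beta * bpp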
Evaluation We evaluate image compression models on 4 datasets: Kodak (Kodak, 1999), the CLIC2021 testset (CLIC, 2021), the Tecnick testset (Asuni & Giachetti, 2014), and the JPEG-AI testset (JPEG-AI, 2020). We use BPG and VTM-12.1 to code the images in YUV444 mode, and then calculate PSNR in RGB. For a fair comparison, all images are cropped to multiples of 256 to avoid padding for neural codecs.
We evaluate P-frame models on UVG (Mercat et al., 2020)6 and MCL-JCV (Wang et al., 2016), and compare them with the test model implementation of HEVC, referred to as HEVC (HM), and the open-source library implementation of HEVC, referred to as HEVC (x265). To align configurations, all video codecs are evaluated in low-delay-P mode with a fixed GOP size of 12.
Besides rate-distortion curves, we also evaluate different models using BD-rate (Tan et al., 2016), which represents the average bitrate saving for the same reconstruction quality. For image codecs, BD-rate is computed for each image and then averaged across all images; for video codecs, BD-rate is computed for each video and then averaged across all videos. More details on testset preprocessing and traditional codec configurations can be found in Appendix B.2.
5We did not observe RD improvement when applying flow loss to Conv-SSF training. 6We use the original 7 UVG sequences that are commonly used in other works (Agustsson et al., 2020).
4.2 RESULTS
RD and BD-rate for image codecs The RD curves for all compared image codecs evaluated on Kodak are shown in Figure 3a, and the relative rate reduction of each codec compared to VTM-12.1 at a range of PSNR levels is shown in Figure 3b 7.
As can be seen from Figure 3, SwinT transform consistently outperforms its convolutional counterpart; the RD-performance of SwinT-Hyperprior is on-par with Conv-ChARM, despite the simpler prior; SwinT-ChARM outperforms VTM-12.1 across a wide PSNR range. In the Appendix (Figure 28 and Figure 30), we further incorporate the results from existing literature known to us for a complete comparison. Particularly, our Conv-Hyperprior is much better than the results reported in (Minnen et al., 2018) (no context), and Conv-ChARM is on par with (Minnen & Singh, 2020).
In Table 1, we summarize the BD-rate of image codecs across all four datasets with VTM-12.1 as anchor. On average, SwinT-ChARM is able to achieve a 3.8% rate reduction compared to VTM-12.1. The relative gain from Conv-Hyperprior to SwinT-Hyperprior is on average 12%, and that from Conv-ChARM to SwinT-ChARM is on average 9%. Further gains over VTM-12.1 can be obtained by test-time latent optimization (Campos et al., 2019) or full model instance adaptation (van Rozendaal et al., 2021), which are out of the scope of this work.
7The relative rate-saving curves in Figure 3b are generated by first interpolating the discrete RD points (averaged across the testset) with a cubic spline, and then comparing the bitrate of different models at fixed PSNR.
8RD plot for the other three datasets can be found in Appendix (Figure 14, Figure 15, and Figure 16)
RD and BD-rate for video codecs For P-frame compression, we evaluate SwinT-SSF on UVG and MCL-JCV, with RD comparisons shown in Figure 4. Again, the SwinT transform leads to consistently better RD. Table 2 summarizes BD-rate with our reproduced Conv-SSF model as anchor. We can see that SwinT-SSF achieves an average of 11% rate saving over Conv-SSF. Additionally, we show that if the SwinT transform is only applied to the residual autoencoder (labeled as SwinT-SSF-Res), it only obtains about a 4.6% gain, which indicates that both flow and residual compression benefit from SwinT as encoder and decoder transforms. Note that SwinT-SSF still lags behind HM, suggesting considerable room for improvement in neural video compression. For a per-video breakdown of BD-rate, see Figure 18 and Figure 17 in the Appendix.
Decoding complexity We evaluate the decoding complexity of 4 image codecs on 100 images of size 768 × 512 and report the metrics in Table 3, including decoding time, GMACs, GPU peak memory during decoding, and total model parameters. The models run with PyTorch 1.9.0 on a workstation with one RTX 2080 Ti GPU. From the table, the inference time of the SwinT decoder is less than that of the Conv decoder. The entropy decoding time of the ChARM prior is about twice that of the factorized prior. The total decoding time of SwinT-based models is less than that of Conv-based models. In ablation study A5, we show a smaller SwinT-Hyperprior with 20.6M parameters has almost the same RD as the SwinT-Hyperprior profiled here. For details on encoding complexity, the profiling setup, and scaling with image resolution, please refer to Table 4 and Section D.3 in the Appendix.
Scaling behavior To see how the BD-rate varies with model size, we scale SwinT-Hyperprior and Conv-Hyperprior to be twice or half the size of the base models (i.e. medium size)9. The result is shown in Figure 5. For both types of models, as we reduce the base model size, there is a sharp drop in performance, while doubling the model size only leads to marginal gain. Noticeably, SwinT-Hyperprior-small is on par with Conv-Hyperprior-medium even with half the parameters, and SwinT transforms in general incur fewer MACs per parameter.
In Figure 1, we further consolidate the decoding latency and scaling behavior study into a single plot and show that SwinT-ChARM runs at comparable speed to VTM-12.1 while achieving better performance,10 as opposed to state-of-the-art neural codecs with spatial autoregressive priors that decode orders of magnitude slower.
4.3 ANALYSIS
Latent correlation One of the motivating principles of transform coding is that simple coding can be made more effective in the transform domain than in the original signal space (Goyal, 2001; Ballé et al., 2021). A desirable transform would decorrelate the source signal so that simple scalar quantization and factorized entropy model can be applied without constraining coding performance. In most mature neural compression solutions, uniform scalar quantization is adopted together with a learned factorized or conditionally factorized Gaussian prior distribution. It is critical, then, to effectively factorize and Gaussianize the source distribution so that coding overhead can be minimized.
Specifically, in hyperprior based models (Ballé et al., 2018), ȳ := (y − µ)/σ is modeled as a standard spherical normal vector. The effectiveness of the analysis transform ga can then be evaluated by measuring how much correlation there is among different elements of ȳ. We are particularly interested in the correlation between nearby spatial positions, which are heavily correlated in the source domain for natural images. In Figure 6, we visualize the normalized spatial correlation11 of ȳ averaged over all latent channels, and compare Conv-Hyperprior with SwinT-Hyperprior at β = 0.001. It can be observed that while both lead to small cross-correlations, the Swin Transformer does a much better job with uniformly smaller correlation values, and the observation is consistent
9detailed model configurations are provided in Appendix A.3. 10Note that it is difficult to fairly compare the decoding time of VTM and neural codecs since they run on different hardware. For more discussion please refer to Appendix D.3. 11The value with index (i, j) corresponds to the normalized cross-correlation of latents at spatial location (w, h) and (w + i, h+ j), averaged across all latent elements of all images on Kodak.
with other β values, which are provided in Figure 20 in the Appendix. This suggests that transformer-based transforms incur less redundancy across different spatial latent locations than convolutional ones, leading to an overall better rate-distortion trade-off. The larger spatial correlation (and thus redundancy) in Conv-Hyperprior also explains why a compute-heavy spatial auto-regressive model is often needed to improve RD with convolution-based transforms (Minnen et al., 2018; Lee et al., 2019; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020). Figure 6 also reveals that most of the correlation of a latent comes from the four elements surrounding it. This suggests that a checkerboard-based conditional prior model (He et al., 2021) may yield further coding gain.
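The statistic visualized in Figure 6 can be estimated as in the following sketch; the exact estimator is our assumption (the figure averages over all latent elements of all Kodak images).

import torch

def latent_spatial_correlation(y_bar, max_offset=4):
    # y_bar = (y - mu) / sigma, shape (B, C, H, W); ~N(0, I) if the transform decorrelates well
    _, _, H, W = y_bar.shape
    corr = torch.zeros(max_offset + 1, max_offset + 1)
    for i in range(max_offset + 1):
        for j in range(max_offset + 1):
            a = y_bar[:, :, :H - i, :W - j]
            b = y_bar[:, :, i:, j:]
            corr[i, j] = (a * b).mean() / (a.std() * b.std() + 1e-9)
    return corr  # corr[0, 0] is ~1; small off-diagonal values indicate good decorrelation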
Effective receptive field Intra prediction in HEVC or AV1 relies only on the left and top borders of the current coding block (Sullivan et al., 2012; Chen et al., 2018), except for intra block copy for screen content (Xu et al., 2016). We would like to see how large the effective receptive field (ERF) (Luo et al., 2017) of SwinT encoders is compared to that of Conv encoders. The theoretical receptive field of the encoders (ga, ha ◦ ga) in SwinT-based codecs is much larger than that of Conv-based codecs. However, comparing Figure 7a with 7e and Figure 7b with 7f, the ERF of SwinT encoders after training is even smaller than that of Conv encoders. When we examine the ERF of the released Swin transformers for classification, detection, and segmentation tasks, they all span the whole input image. This contrast suggests that (natural) image compression with a rate-distortion objective is a local task, even with transformer-based nonlinear transforms. We further look into P-frame compression models, particularly the ERF of the two types of transforms in the flow codec and the residual codec, as shown in Figure 7d & 7h, and Figure 7c & 7g. Clearly, for the flow codec, the SwinT transform has a much larger ERF than its convolutional counterpart. For the residual codec, the ERF of SwinT transforms is similar to the image (I-frame) compression case. This shows the flexibility of SwinT encoders to attend to longer or shorter range depending on the task. To get a better picture of the behavior of attention layers in SwinT transforms, we also show the attention distance of each layer in Figure 22.
Progressive decoding The ERF in the previous section characterizes the encoder transforms; here we further investigate the decoder transforms through the lens of progressive decoding (Rippel et al., 2014; Minnen & Singh, 2020; Lu et al., 2021). Initialized with the prior mean, the input to the decoder is progressively updated with the dequantized latent ŷ in terms of coding units, leading to gradually improved reconstruction quality. For a latent of shape (C,H,W), we consider three types of coding units, i.e. per channel (1, H, W), per pixel (C, 1, 1), and per element (1, 1, 1). The coding units are ordered by the sum of the prior std of all elements within each unit. The RD curves of progressive decoding for SwinT-Hyperprior and Conv-Hyperprior are shown in Figure 8a; they closely follow each other when ordered by channel or element, but are significantly apart when ordered by pixel (spatial dim). In particular, we show an extreme case where half of the pixels in the latent (masked by a checkerboard pattern) are updated with dequantized values, corresponding to the two scatter points in Figure 8a. One visual example (CLIC2021 test) under this setup is shown in Figure 8b, where we can clearly see the SwinT decoder achieves better reconstruction quality than the Conv decoder, mainly in terms of a more localized response to a single latent pixel. This is potentially useful for region-of-interest decompression. More visual examples are shown in Figure 26.
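A sketch of the channel-wise variant of this procedure is given below; the ordering by the sum of prior std follows the description above, while the function and variable names are our own.

import torch

def channel_progressive_inputs(y_hat, mu, sigma):
    # Start from the prior mean and substitute dequantized channels one at a time,
    # in decreasing order of the total prior std of each channel (coding unit (1, H, W)).
    current = mu.clone()
    order = torch.argsort(sigma.sum(dim=(-2, -1)), descending=True)
    for c in order:
        current[..., c, :, :] = y_hat[..., c, :, :]
        yield current.clone()  # pass each partial latent through gs for one RD point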
4.4 ABLATION STUDY
Relative position bias There are two sources of positional information in SwinT transforms, namely the Space-to-Depth modules and the additive relative position bias (RPB). Even when the RPB is removed, SwinT-Hyperprior still outperforms Conv-Hyperprior across all bitrates, which indicates that image compression may not require accurate relative position information.
Shifted window The original motivation of the shifted window design is to promote inter-layer feature propagation across non-overlapping windows. Image compression performance drops only slightly when there is no shifted window at all. This further suggests that image compression relies mainly on local information.
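The sketch below illustrates the (shifted) window partition on a channels-first feature map; the cyclic roll is the standard way the shifted partitioning is realized in SwinT implementations, and the helper name is ours.

import torch

def window_partition(x, win, shift=0):
    # Split a (B, C, H, W) feature map into non-overlapping win x win windows.
    # Consecutive SwinT blocks use shift = win // 2 so that attention in the next
    # block mixes information across window boundaries of the previous block.
    if shift > 0:
        x = torch.roll(x, shifts=(-shift, -shift), dims=(2, 3))  # cyclic shift
    b, c, h, w = x.shape
    x = x.reshape(b, c, h // win, win, w // win, win)
    x = x.permute(0, 2, 4, 3, 5, 1)                              # gather windows
    return x.reshape(-1, win * win, c)                           # (B*num_windows, win*win, C)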
The details of ablations A3-A5 in Figure 9 can be found in Section F of the appendix.
5 CONCLUSION
In this work we propose Swin-transformer based transforms for image and video compression. In the image compression setting, the SwinT transform consistently outperforms its convolutional counterpart. In particular, the proposed SwinT-ChARM model outperforms VTM-12.1 at comparable decoding speed, which, to the best of our knowledge, is a first among learning-based methods. We also show the effectiveness of SwinT transforms when extended to the P-frame compression setting. Compared with convolutional transforms, SwinT transforms decorrelate the latent spatially better, have a more flexible receptive field to adapt to tasks that require either short-range (image) or long-range (motion) information, and allow better progressive decoding of latent pixels. While pushing neural image compression to a new level in terms of the rate-distortion-computation trade-off, we believe this is only the starting point for developing more efficient transformer-based image and video codecs.
ACKNOWLEDGMENTS
We would like to thank Amir Said for developing entropy coding and great advice on data compression in general. We would also appreciate the helpful discussions from Reza Pourreza and Hoang Le, and draft reviews from Auke Wiggers and Johann Brehmer.
Appendix
Table of Contents
A Models
A.1 Convolution baselines
A.2 Swin-Transformer based compression models
A.3 Model configurations for model size scaling study

B Training and Evaluation
B.1 Training
B.2 Traditional codec evaluation

C BD rate computation
C.1 BD rate for image codec
C.2 BD rate for video codec

D More Results
D.1 Image compression
D.2 Video Compression
D.3 Coding complexity

E Analysis
E.1 Spatial correlation of latent
E.2 Effective Receptive Field
E.3 Rate distribution across latent channels
E.4 Centered kernel alignment
E.5 Progressive decoding

F More ablation studies
A MODELS
A.1 CONVOLUTION BASELINES
Conv-Hyperprior and Conv-ChARM The architectures of Conv-Hyperprior and Conv-ChARM are shown in Figure 10 and Figure 11. For both architectures, our base model (i.e. medium size) has the following hyperparameters: (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192).
A.2 SWIN-TRANSFORMER BASED COMPRESSION MODELS
SwinT-Hyperprior, SwinT-ChARM For both SwinT-Hyperprior and SwinT-ChARM, we use the same configuration: (wg, wh) = (8, 4), (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192), and (d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1), where C, d, and w are defined in Figure 13 and Figure 2. The head dimension is 32 for all attention layers in SwinT-based models.
SwinT-SSF For the SwinT transforms used in the SSF variant, the first Patch Merge block has a downsampling rate of 4 and the two other Patch Merge blocks have a downsampling rate of 2. Thus the overall downsampling rate of the encoder is still 16, the same as in the image compression models. There are only 3 transformer stages, with depths 2, 4, 2. The embedding dimension is 96. The numbers of latent and hyper-latent channels are both 192. The window size is 4 for the flow codec and 8 for the residual codec.
SwinT-SSF-Res This is a variant where only the residual autoencoder uses SwinT transforms; it has the same architecture as the residual autoencoder in SwinT-SSF.
A.3 MODEL CONFIGURATIONS FOR MODEL SIZE SCALING STUDY
A.3.1 SWINT-HYPERPRIOR
Set of model hyperparameters that are common to all experiments: (d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1), (wg, wh) = (8, 4)
SwinT-Hyperprior (small) (C1, C2, C3, C4, C5, C6) = (96, 128, 160, 192, 96, 128)
SwinT-Hyperprior (medium) (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192)
SwinT-Hyperprior (large) (C1, C2, C3, C4, C5, C6) = (160, 256, 352, 448, 192, 256)
A.3.2 CONV-HYPERPRIOR
Conv-Hyperprior (small) (C1, C2, C3, C4, C5, C6, C7) = (192, 192, 192, 192, 128, 128, 128)
Conv-Hyperprior (medium) (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192)
Conv-Hyperprior (large) (C1, C2, C3, C4, C5, C6, C7) = (448, 448, 448, 448, 256, 256, 256)
B TRAINING AND EVALUATION
B.1 TRAINING
All image compression models are trained on the CLIC2020 training set, which contains both the professional and mobile training sets, in total 1,633 high-resolution natural images. Conv-Hyperprior and SwinT-Hyperprior are trained for 2M batches. Each batch contains 8 patches of size 256 × 256 randomly cropped from the training images. The learning rate starts at 10−4 and is reduced to 10−5 at step 1.8M.
For Conv-ChARM, we first train a model at β = 0.0001 from scratch for 2M steps; with it as the starting point, we continue training Conv-ChARM-β for the other β ∈ B for 1.5M steps. For SwinT-ChARM-β, we load the transform weights from the 2M-step checkpoint of the pretrained SwinT-Hyperprior-β, then finetune the transforms together with the randomly initialized ChARM prior for 1.1M steps. The learning rate starts at 10−4 and is reduced to 10−5 for the last 100K steps.
The training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering the rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rate and distortion, for each solution we train 5 models with β ∈ B = {0.003, 0.001, 0.0003, 0.0001, 0.00003}. Models at larger bitrates (i.e. smaller β) usually need to be trained longer to converge. In particular, for the results presented in this paper, we train SwinT-Hyperprior-0.00003 for 2.5M steps instead of the 2M steps used for the other 4 lower-bitrate models.
For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinT-SSF are trained on the Vimeo-90k dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size of 8, and crop size of 256 × 256, followed by 50K steps of training with learning rate 10−5 and crop size12 384 × 256. The models are trained with 8 β values {2^γ × 10^−4 : γ ∈ {0, 1, ..., 7}}. We adopt one critical trick from (Jaegle et al., 2021; Meister et al., 2018) to stabilize training, i.e. forwarding each video sequence twice during one optimization step (mini-batch): once in the original frame order, and once in the reversed frame order. When this trick is used, we set the batch size to 4 instead of 8. Finally, we add the flow loss only between 0 and 200K steps, which we found is not critical for stable training but helps improve the RD.
For all model training, the Adam optimizer is used without weight decay. Training for 2M steps takes about 10 days and 14 days for Conv-Hyperprior and SwinT-Hyperprior, respectively, on a single Nvidia V100 GPU. For the P-frame models, total training time is about 7.5 days on a single Nvidia V100 GPU.
For all models, we use mixed quantization during training (Minnen & Singh, 2020), i.e. adding uniform noise to the continuous latent before passing it to the prior model, and subtracting the prior mean from the continuous latent followed by rounding before passing it to the decoder transform.
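This scheme can be sketched as below; the straight-through gradient for the rounding step is a standard implementation choice and our assumption here, not something spelled out in the text above.

import torch

def mixed_quantization(y, mu):
    # Rate path: the prior model sees the latent perturbed by uniform noise
    y_noisy = y + torch.empty_like(y).uniform_(-0.5, 0.5)
    # Reconstruction path: subtract the prior mean, round, add the mean back;
    # the .detach() implements a straight-through gradient for the encoder
    res = y - mu
    y_hat = res + (torch.round(res) - res).detach() + mu
    return y_noisy, y_hat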
12We did not use the crop size 384 × 384 during the second stage as in the original paper because the resolution of Vimeo dataset is 448 × 256. We found in our case increasing crop size from 256 × 256 to 384× 256 in the second stage does not improve RD.
B.2 TRADITIONAL CODEC EVALUATION
In this section, we provide evaluation script used to generate results for traditional codecs.
B.2.1 IMAGE CODECS
VTM-12.1: VTM-12.1 software is built from https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/-/tags/VTM-12.1 and we use the script from CompressAI (https://github.com/InterDigitalInc/CompressAI/tree/efc69ea24) for dataset evaluation. Specifically, the following command is issued to gather VTM-12.1 image compression evaluation results:
python -m compressai.utils.bench vtm [path to image folder] \
    -c [path to VVCSoftware_VTM folder]/cfg/encoder_intra_vtm.cfg \
    -b [path to VVCSoftware_VTM folder]/bin \
    -q 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40
BPG: BPG software is obtained from https://bellard.org/bpg/ and the following commands are used for encoding and decoding.
bpgenc -e x265 -q [0 to 51] -f 444 -o [encoded heic file] [original png file]
bpgdec -o [decoded png file] [encoded heic file]
B.2.2 VIDEO CODECS
HEVC (x265)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] -i [input yuv420 raw video] -c:v libx265 -preset medium -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency -x265-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
AVC (x264)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] -i [input yuv420 raw video] -c:v libx264 -preset medium -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency -x264-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
HEVC (HM)
[Path to HM folder]/bin/TAppEncoderStatic -c [Path to HM folder]/cfg/encoder_lowdelay_P_main.cfg -i [input yuv raw video] --InputBitDepth=8 -wdt [width] -hgt [height] -fr [frame-rate] -f [number of frames] -o [output yuv video] -b [encoded bitstream bin file] -ip 12 -q [12, 17, 22, 27, 32, 37, 42]
C BD RATE COMPUTATION
import numpy as np
import scipy.interpolate


def Bjontegaard_Delta_Rate(
    # rate and PSNR in ascending order
    rate_ref, psnr_ref,  # reference codec
    rate_new, psnr_new,  # new codec
):
    # Integrate over the PSNR range supported by both codecs, clipped to [30, 44] dB
    min_psnr = max(psnr_ref[0], psnr_new[0], 30)
    max_psnr = min(psnr_ref[-1], psnr_new[-1], 44)

    # Fit log-rate as a function of PSNR with a cubic spline for each codec
    log_rate_ref = np.log(rate_ref)
    log_rate_new = np.log(rate_new)
    spline_ref = scipy.interpolate.CubicSpline(
        psnr_ref, log_rate_ref, bc_type='not-a-knot', extrapolate=True,
    )
    spline_new = scipy.interpolate.CubicSpline(
        psnr_new, log_rate_new, bc_type='not-a-knot', extrapolate=True,
    )

    # Average difference in log-rate over the common PSNR range
    delta_log_rate = (
        spline_new.integrate(min_psnr, max_psnr)
        - spline_ref.integrate(min_psnr, max_psnr)
    )
    delta_rate = np.exp(delta_log_rate / (max_psnr - min_psnr))

    # Percentage bitrate change of the new codec relative to the reference
    return 100 * (delta_rate - 1)
C.1 BD RATE FOR IMAGE CODEC
# Evaluate BD-rate on an image dataset
bd_rates = list()
for image in image_dataset:
    # Evaluate rate and PSNR of the reference and the new codec
    # on this image at a set of quality settings
    rate_ref, psnr_ref = ReferenceCodec(image, qp=[...])
    rate_new, psnr_new = NewImageCodec(image, beta=[...])
    bd_rates.append(
        Bjontegaard_Delta_Rate(
            rate_ref, psnr_ref,
            rate_new, psnr_new,
        )
    )

# BD-rate is computed per image and then averaged
bd_rate = np.mean(bd_rates)
C.2 BD RATE FOR VIDEO CODEC
# Evaluate BD-rate on a video dataset
bd_rates = list()
for video in video_dataset:
    # Evaluate rate and PSNR of the reference and the new codec
    # on this video at a set of quality settings
    rate_ref, psnr_ref = ReferenceCodec(video, qp=[...])
    rate_new, psnr_new = NewVideoCodec(video, beta=[...])
    bd_rates.append(
        Bjontegaard_Delta_Rate(
            rate_ref, psnr_ref,
            rate_new, psnr_new,
        )
    )

# BD-rate is computed per video and then averaged
bd_rate = np.mean(bd_rates)
D MORE RESULTS
D.1 IMAGE COMPRESSION
Additional rate-distortion results on CLIC2021, Tecnick, and JPEG-AI are provided in Figure 14, Figure 15, and Figure 16.
For a complete comparison with results from the existing literature, we provide a summary RD plot on Kodak of all neural image codec solutions known to us in Figure 28. In Figure 29 and Figure 30, we plot the percentage rate saving with BPG444 and VTM-12.1 as reference, respectively.
D.2 VIDEO COMPRESSION
In Figure 17 and Figure 18, we provide performance comparison of Conv-SSF, SwinT-SSF, HEVC (x265), and AVC (x264) with per-video breakdown.
D.3 CODING COMPLEXITY
We evaluate the decoding complexity of all neural image codecs in terms of the time for network inference and entropy coding, peak GPU memory, model size, etc. We select 100 high-resolution images from the CLIC testset and center-crop them to three resolutions (768 × 512, 1280 × 768, 1792 × 1024) to see how those metrics scale with image size. Batch size is one for all model inference. We run the experiment on a local workstation with one RTX 2080 Ti GPU, with PyTorch 1.9.0 and Cuda toolkit 11.1. For bit-exact entropy coding, we need to use deterministic13 convolution. The neural networks run on the single GPU and entropy coding runs on the CPU with 8 threads. We follow the standard protocols to measure the inference time and peak memory of neural nets, such as GPU warm-up and synchronization. File open/close is excluded from the coding time measurement. MACs, GPU peak memory, and model parameter count are profiled using the get_model_profile function of the deepspeed profiler14.
We show more details on the coding complexity of neural image codecs in Figure 19, particularly the linear scaling to image resolution of both SwinT-based and Conv-based models. The break-down of encoding complexity is shown in Table 4.
For completeness, we also report CPU coding-time profiling in Table 5 and Table 6. The evaluation setup is the same as in the GPU profiling case, except that the models are run on the CPU instead (same host machine, with an Intel(R) Xeon(R) W-2123 CPU @ 3.60GHz).
Table 7 reports the encoding and decoding time of VTM-12.1 under different quantization parameters (QPs), evaluated on an Intel Core i9-9940 CPU @ 3.30GHz and averaged over 24 Kodak images. As can be seen from the table, the decoding time of VTM-12.1 is a function of reconstruction quality, with longer decoding time observed for higher-quality reconstruction. In Figure 1, the reported VTM-12.1 decoding speed corresponds to a QP value of 28, where the bpp value is similar to that obtained by models trained with β = 0.001. It is worth pointing out that the VTM-12.1 encoding process is much slower, ranging anywhere from 1 to 5 minutes per image, whereas the neural codecs encode much faster.
13https://pytorch.org/docs/1.9.0/generated/torch.use_deterministic_algorithms.html?highlight=deterministic#torch.use_deterministic_algorithms
14https://www.deepspeed.ai/tutorials/flops-profiler/#usage-outside-the-deepspeed-runtime
E ANALYSIS
E.1 SPATIAL CORRELATION OF LATENT
We visualize the spatial correlation map for Conv-Hyperprior and SwinT-Hyperprior at different β in Figure 20.
E.2 EFFECTIVE RECEPTIVE FIELD
See Figure 21 for the effective receptive field of the composed encoding transforms ha ◦ ga, and Figure 22 for the mean attention distance visualization of each head within each transformer layer.
E.3 RATE DISTRIBUTION ACROSS LATENT CHANNELS
It is generally believed that ConvNets learn to extract various features and store them in separate channels of the activations. Here we look into the features in the latent channels, which are to be quantized and entropy-coded into bitstreams. In particular, we sort the channels by their total bitrate, averaged over the Kodak dataset (24 768 × 512 images). The result is shown in Figure 23. We find an interesting phenomenon across models under different bitrates: there is a cutoff point of the bitrate-vs-channel curve where the bitrate suddenly drops to zero, which manifests the rate constraint in the loss function. As expected, the cutoff index decreases for models trained for smaller bitrates (larger β).
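The per-channel bitrate behind Figure 23 can be estimated as in the sketch below, assuming the prior likelihoods of the quantized latent are available; the function name is ours.

import torch

def per_channel_bpp(likelihoods, image_height, image_width):
    # likelihoods: (B, C, H', W') prior probabilities of the quantized latent
    bits = (-torch.log2(likelihoods)).sum(dim=(0, 2, 3))          # total bits per channel
    bpp = bits / (likelihoods.shape[0] * image_height * image_width)
    bpp_sorted, order = torch.sort(bpp, descending=True)          # exposes the cutoff point
    return bpp_sorted, order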
E.4 CENTERED KERNEL ALIGNMENT
To investigate the difference or similarity between the latent features of Conv-based and SwinT-based models, we resort to a commonly used tool in representation learning called centered kernel alignment (CKA) (Kornblith et al., 2019). We evaluate CKA between each Conv latent channel and each SwinT latent channel (both models trained under the same β) over the 24 Kodak images. There are 320 channels in both the Conv latent and the SwinT latent, resulting in a 320 × 320 CKA matrix. The result is shown in Figure 24. The latent channels are ordered by the average bitrate of each channel over the Kodak images (same as in Section E.3). The CKA matrix has a clear block structure, where the high-similarity region corresponds to the latent channels before the bitrate cutoff in the rate distribution curve (Figure 23).
Identification of SwinT and Conv latent channels with CKA Within the block of high similarity (from the CKA matrix), we identify the 'less' similar SwinT latent channels, i.e. those with the lowest CKA values against all Conv latent channels. For each of the identified SwinT channels, we then find the Conv latent channel with the largest CKA value between the two. This way, we are able to identify latent channels of two different models with high similarity. We show the top 8 identified channels in Figure 25. The channels are indeed highly similar, up to a sign flip, even though the two model architectures are quite different. This empirical result is relevant to the literature on the identifiability of generative models (Khemakhem et al., 2020).
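For reference, linear CKA between two channels can be computed as in the sketch below, where each channel is flattened into a (num_examples, num_features) matrix over images and spatial positions (our assumption on the exact featurization).

import torch

def linear_cka(x, y):
    # Linear CKA (Kornblith et al., 2019): ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    x = x - x.mean(dim=0, keepdim=True)  # center the features
    y = y - y.mean(dim=0, keepdim=True)
    hsic = (x.T @ y).norm() ** 2
    return (hsic / ((x.T @ x).norm() * (y.T @ y).norm())).item()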
E.5 PROGRESSIVE DECODING
More visual examples of reconstructions of checkerboard (spatially) masked latent are provided in Figure 26.
Channel-wise progressive decoding Here we visualize the behavior of channel-wise progressive decoding of Conv and SwinT models. For both models, we order the latent channels by a heuristic importance metric: the reduction in distortion over the increase in rate when one more channel is included for decoding. Starting from the per-channel bitrate order, we pass the leading channels (zeroing out all remaining channels) to the decoder to obtain a reconstruction and calculate the distortion. We plot the top 8 channels following this importance order. For each channel, we show 6 maps from top to bottom: the latent values, the mean prediction from the hyper-decoder, the standard-deviation prediction from the hyper-decoder, the bitmap, the reconstruction with only the current channel, and the reconstruction with all channels up to the current one. The result is shown in Figure 27. For Conv models, the top 3 most important channels are usually responsible for lightness and the two color components (blue-yellow, red-green), similar to the LAB colorspace. The rest of the latent channels are responsible for adding details like texture and edges. For the SwinT models at low bitrate, there is one markedly different channel (the first column in Figure 27b), which is at a coarse scale (with smooth blocks) and responsible for reconstructing a nearly constant image with values close to 120 (the mean value of natural image datasets). This latent channel costs an extremely small bitrate but reaches a PSNR of 13dB. When we remove this first channel, progressive reconstruction with the remaining 7 leading channels only reaches a PSNR of around 16dB, instead of the 26dB shown in the figure.
F MORE ABLATION STUDIES
Local self-attention To see if local self-attention is the most important component of transformers, we replace it with a depthwise separable convolution block15 (Han et al., 2021; El-Nouby et al., 2021), which performs spatial feature aggregation similar to self-attention, while keeping all other components the same as in the SwinT. We find this change only leads to a minor degradation in RD. This suggests that other components of transformers, such as MLPs and skip connections, may also play a big role beyond self-attention in the leading performance in our work and many other tasks (Dong et al., 2021).
Small depths Upon investigating the mean attention distance shown in Figure 22, we find that in the last block of each of the last two encoder stages, about half of the attention heads degenerate to attending to fixed nearby pixels. This suggests redundant transformer blocks at those stages, so we remove those two blocks, i.e. going from depths [2, 2, 6, 2] to [2, 2, 5, 1]. The resulting SwinT-Hyperprior has even fewer parameters (20.6M) than Conv-Hyperprior (21.4M), with almost no RD loss compared to the larger model. We expect that more hyperparameter search will identify models with a better RD-complexity trade-off than we currently show in this work.
Deeper Conv encoder Deeper models are usually more expressive (Raghu et al., 2017), and state-of-the-art Conv-based compression models typically use much deeper networks than the encoder in the original Hyperprior model (Ballé et al., 2018). As a sanity check on whether deeper convolutional transforms can outperform SwinT-based encoder transforms with 12 blocks, we take an existing design (Chen et al., 2021) with residual blocks and attention (sigmoid gating) layers, which has over 50 conv layers in both the encoder and the decoder and more parameters than our Conv baseline. It indeed improves the RD at lower bitrates, but remains worse than SwinT-Hyperprior, and becomes much worse at higher bitrates. This is probably the reason that compression models based on this type of transform did not report results at higher bitrates.
15Conv1×1-LayerNorm-ReLU-DSConv3×3-LayerNorm-ReLU-Conv1×1-LayerNorm-ReLU | 1. What is the focus of the paper regarding learned image and video compression?
2. What are the strengths of the proposed approach, particularly in terms of achieving a milestone?
3. What are the weaknesses of the paper, especially regarding runtime comparisons?
4. Do you have any concerns about the thoroughness of the evaluation and ablation experiments?
5. How does the reviewer assess the significance of the paper's contributions despite some limitations? | Summary Of The Paper
Review | Summary Of The Paper
This paper addresses the problem of improving rate-distortion (RD) performance for learned image and video compression without ignoring runtime. Much of the literature on learned compression improves RD performance with more complex entropy models. Especially for autoregressive models, this can lead to excessively slow decode times. This paper instead uses a less expensive (and generally less powerful) entropy model and explores the use of a transformer in the encoder and decoder transforms.
The result is a much faster model that still achieves near-SOTA rate-distortion performance. In particular, the SwinT-ChARM model described in this paper outperforms VVC (via VTM 12.1, a recent version of the reference implementation for H.266) with faster decode times. As far as I know, this is a milestone for learned image compression (other methods have outperformed it in terms of RD, but only with much slower decoders).
Review
The primary strength of the paper is reaching the empirical milestone of outperforming VTM in terms of both rate-distortion and decode runtime. This is achieved by merging existing models: Swin transformers (SwinT) for the transforms and a channel-wise autoregressive model (ChARM) for the entropy model.
Another strength is the thorough evaluation in terms of transform variants (SwinT vs. convolution-based), entropy models (ChARM vs. a hyperprior), architecture size (number of layers and channel depths), and application to P-frame encoding for video compression. The authors also include a thorough range of ablation experiments, look at the ability of different transforms to generate a spatially independent latent representation, and explore the effective receptive field.
One weakness of the paper is in the runtime comparison between VTM and the various learned methods, though this is generally a difficult comparison to make. The difficulty arises because VTM runs on CPU (the authors state that an i9-9940X was used), while neural networks are typically run on a GPU or TPU (the authors say that an RTX 2080 Ti was used). This leads to an apples-to-oranges comparison that should be stated more explicitly. Decode times on the CPU would also strengthen the paper.
In addition, the runtime of VTM varies with encoding quality. At least based on our tests (caveat: on a different CPU and with a different version of VTM), VTM can decode a 768x512 image in anywhere from 20ms (at the lowest quality level) up to 230ms at the highest quality (note that these numbers include disk I/O). Based on this, a more accurate claim is that SwinT-ChARM decodes faster than VTM at some quality levels (very high ones), but can be much slower for high compression rates.
Although the authors don't mention it, VTM can actually be very slow to encode an image at a very high quality level (on the order of dozens of seconds). While I agree that decode speed is most important for typical applications (since images are often encoded once and decoded millions of times), the relatively fast encode speed of the proposed model can be seen as a strength.
ICLR | Title
Transformer-based Transform Coding
Abstract
Neural data compression based on nonlinear transform coding has made great progress over the last few years, mainly due to improvements in prior models, quantization methods, and nonlinear transforms. A general trend in many recent works pushing the limit of rate-distortion performance is to use ever more expensive prior models that can lead to prohibitively slow decoding. Instead, we focus on more expressive transforms that result in a better rate-distortion-computation trade-off. Specifically, we show that nonlinear transforms built on Swin-transformers can achieve better compression efficiency than transforms built on convolutional neural networks (ConvNets), while requiring fewer parameters and shorter decoding time. Paired with a compute-efficient Channel-wise AutoRegressive Model prior, our SwinT-ChARM model outperforms VTM-12.1 by 3.68% in BD-rate on Kodak with comparable decoding speed. In the P-frame video compression setting, we are able to outperform the popular ConvNet-based scale-space-flow model by 12.35% in BD-rate on UVG. We provide model scaling studies to verify the computational efficiency of the proposed solutions and conduct several analyses to reveal the sources of coding gain of transformers over ConvNets, including better spatial decorrelation, a flexible effective receptive field, and more localized response of latent pixels during progressive decoding.
1 INTRODUCTION
Transform coding (Goyal, 2001) is the dominant paradigm for compression of multi-media signals, and serves as the technical foundation for many successful coding standards such as JPEG, AAC, and HEVC/VVC. Codecs based on transform coding divide the task of lossy compression into three modularized components: transform, quantization, and entropy coding. All three components can be enhanced by deep neural networks: autoencoder networks are adopted as flexible nonlinear transforms, deep generative models are used as powerful learnable entropy models, and various differentiable quantization schemes are proposed to aid end-to-end training. Thanks to these advancements, we have seen rapid progress in the domain of image and video compression. Particularly, the hyperprior line of work (Ballé et al., 2018; Minnen et al., 2018; Lee et al., 2019; Agustsson et al., 2020; Minnen & Singh, 2020) has led to steady progress of neural compression performance over the past two years, reaching or even surpassing state-of-the-art traditional codecs. For example, in image compression, BPG444 was surpassed by a neural codec in 2018 (Minnen et al., 2018), and (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) have claimed on-par or better performance than VTM (a test model of the state-of-the-art non-learned VVC standard).
One general trend in the advancement of neural image compression schemes is to develop ever more expressive yet expensive prior models based on spatial context. However, the rate-distortion improvement from context-based prior modeling often comes with a hefty price tag1 in terms of decoding complexity. Notably, all existing works that claimed on-par or better performance than VTM (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) rely on slow and expensive spatial context based prior models. ∗Equal contribution. †Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. 1In the extreme case when a latent-pixel-level spatial autoregressive prior is used, decoding of a single 512x768 image requires no less than 1536 interleaved executions of prior model inference and entropy decoding (assuming the latent is downsampled by a factor of 16x16).
The development of nonlinear transforms, on the other hand, has been largely overlooked. This leads us to the following questions: can we achieve the same performance as that of expensive prior models by designing a more expressive transform together with simple prior models? And if so, how much more complexity in the transform is required?
Interestingly, we show that by leveraging and adapting the recent development of vision transformers, not only can we build neural codecs with simple prior models that outperform ones built on expensive spatial autoregressive priors, but we can do so with smaller transform complexity compared to convolutional counterparts, attaining a strictly better rate-distortion-complexity trade-off. As can be seen in Figure 1, our proposed neural image codec SwinT-ChARM can outperform VTM-12.1 at comparable decoding time, which, to the best of our knowledge, is a first in the neural compression literature.
As main contributions, we 1) extend Swin Transformer (Liu et al., 2021) to a decoder setting and build Swin-transformer based neural image codecs that attain better rate-distortion performance with lower complexity compared with existing solutions, 2) verify its effectiveness in video compression by enhancing scale-space-flow, a popular neural P-frame codec, and 3) conduct extensive analyses and ablation studies to explore differences between convolutions and transformers, and to investigate potential sources of coding gain.
2 BACKGROUND & RELATED WORK
Conv-Hyperprior The seminal hyperprior architecture (Ballé et al., 2018; Minnen et al., 2018) is a two-level hierarchical variational autoencoder, consisting of a pair of encoder/decoder ga, gs, and a pair of hyper-encoder/hyper-decoder ha, hs. Given an input image x, a pair of latent y = ga(x) and hyper-latent z = ha(y) is computed. The quantized hyper-latent ẑ = Q(z) is modeled and entropy-coded with a learned factorized prior. The latent y is modeled with a factorized Gaussian distribution p(y|ẑ) = N (µ, diag(σ)) whose parameters are given by the hyper-decoder (µ,σ) = hs(ẑ). The quantized version of the latent ŷ = Q(y−µ)+µ is then entropy-coded and passed through the decoder gs to derive the reconstructed image x̂ = gs(ŷ). The transforms ga, gs, ha, hs are all parameterized as ConvNets (for details, see Appendix A.1). A minimal sketch of this forward pass is given below.
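In the sketch, Q is realized as rounding at test time (a differentiable proxy is used during training); the split of the hyper-decoder output into (µ, σ) is an implementation assumption, and entropy coding is omitted.

import torch

def hyperprior_forward(x, g_a, g_s, h_a, h_s):
    y = g_a(x)                              # latent
    z = h_a(y)                              # hyper-latent
    z_hat = torch.round(z)                  # Q(z); coded with a learned factorized prior
    mu, sigma = h_s(z_hat).chunk(2, dim=1)  # parameters of p(y | z_hat) = N(mu, diag(sigma))
    y_hat = torch.round(y - mu) + mu        # Q(y - mu) + mu; coded under the Gaussian prior
    x_hat = g_s(y_hat)                      # reconstruction
    return x_hat, (y_hat, mu, sigma), z_hat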
Conv-ChARM (Minnen & Singh, 2020) extends the baseline hyperprior architecture with a channel-wise auto-regressive model (ChARM)2, in which latent y is split along channel dimension into S groups (denoted as y1, . . . ,yS), and the Gaussian prior p(ys|ẑ, ŷ<s) is made autoregressive across groups where the mean/scale of ys depends on quantized latent in the previous groups ŷ<s. In practice, S = 10 provides a good balance of performance and complexity and is adopted here.
Spatial AR models Most of the recent performance advancement of neural image compression is driven by the use of spatial auto-regressive/context models. Variants include causal global prediction (Guo et al., 2021), 3D context (Ma et al., 2021), block-level context (Wu et al., 2020), and nonlocal context (Li et al., 2020; Qian et al., 2021). One common issue with these designs is that decoding cannot be parallelized along spatial dimensions, leading to impractical3 decoding latency, especially for large-resolution images.
2For details refer to Figure 11 and 12 in the Appendix. 3It is reported in (Wu et al., 2020) (Table I) that the decoding time of spatial autoregressive models on a 512x768 image ranges from 2.6s to more than half a minute, depending on the specific design. Also see Figure 1.
ConvNet-based transforms While the design space of prior models is extensively explored, nonlinear transforms, as an important component, have received less attention. A standard convolution encoder-decoder with GDN (Ballé et al., 2016; 2017) as activation is widely adopted in the literature. Later works introduce new transform designs, such as residual blocks with smaller kernels (Cheng et al., 2020), nonlocal (sigmoid gating) layers (Zhou et al., 2019; Chen et al., 2021), invertible neural networks (Xie et al., 2021), and PReLU as an efficient replacement of GDN (Egilmez et al., 2021).
Vision transformers Although many transform networks are proposed, they are still mainly based on ConvNets. Recently transformers (Vaswani et al., 2017) have been introduced to the vision domain and have shown performance competitive with ConvNets in many tasks, e.g. object detection (Carion et al., 2020), classification (Dosovitskiy et al., 2021), image enhancement (Chen et al., 2020), and semantic segmentation (Zheng et al., 2021). Inspired by their success, in this work we explore how vision transformers work as nonlinear transforms for image and video compression.
3 SWIN-TRANSFORMER BASED TRANSFORM CODING
Among the large number of vision transformer variants, we choose Swin Transformer (Liu et al., 2021) (hereafter referred to as SwinT) to build the nonlinear transforms, mainly because of 1) its linear complexity w.r.t. input resolution due to local window attention, and 2) its flexibility in handling varying input resolutions at test time, enabled by relative position bias and hierarchical architecture.
3.1 SWINT ENCODER AND DECODER
The original SwinT is proposed as a vision backbone, i.e. an encoder transform with downsampling. As shown in Figure 2, the SwinT encoder ga contains SwinT blocks interleaved with Patch Merge blocks. The Patch Merge block contains Space-to-Depth (for downsampling), LayerNorm, and Linear layers in sequence. The SwinT block performs local self-attention within each non-overlapping window of the feature maps and preserves the feature size. Consecutive SwinT blocks at the same feature size shift the window partitioning with respect to the previous block, promoting information propagation across nearby windows.
We adopt SwinT encoder as the encoder transform ga in our model, and extend it to SwinT decoder gs by reversing the order of blocks in ga, and replacing the Patch Merge block with a Patch Split block, which contains Linear, LayerNorm, Depth-to-Space (for upsampling) layers in sequence. The architectures for hyper transforms ha, hs are similar to ga, gs with different configurations.
4The ChARM architecture (Minnen & Singh, 2020) is detailed in Figure 12 of Appendix A.1.
With these four SwinT transforms, we propose two image compression models, SwinT-Hyperprior and SwinT-ChARM, whose prior and hyperprior models are respectively the same as in Conv-Hyperprior and Conv-ChARM introduced in Section 2. The full model architectures are shown in Figure 2 and Figure 13.
3.2 EXTENSION TO P-FRAME COMPRESSION
To investigate the effectiveness of SwinT transforms for video compression, we study one popular P-frame compression model called Scale-Space Flow (SSF) (Agustsson et al., 2020). There are three instances of Conv-Hyperprior in SSF, used respectively for compressing I-frames, scale-space flow, and residuals. We propose a SwinT variant, referred to as SwinT-SSF, which is obtained by replacing the Conv transforms ga, gs in the flow codec and residual codec of SSF with SwinT transforms. To stabilize training of the flow codec in SwinT-SSF, we need to remove all LayerNorm layers and reduce the window size (e.g. from 8 to 4). The baseline SSF model will be referred to as Conv-SSF. Even though we build our solution on top of SSF, we believe this general extension can be applied to other ConvNet-based video compression models (Rippel et al., 2021; Hu et al., 2021) as well.
4 EXPERIMENTS AND ANALYSIS
4.1 EXPERIMENT SETUP
Training All image compression models are trained on the CLIC2020 training set. Conv-Hyperprior and SwinT-Hyperprior are trained for 2M batches. Conv-ChARM and SwinT-ChARM are trained for 3.5M and 3.1M steps, respectively. Each batch contains 8 random 256 × 256 crops from training images. The training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering the rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rate and distortion, for each solution we train 5 models with β ∈ {0.003, 0.001, 0.0003, 0.0001, 0.00003}. The detailed training schedule is in Appendix B.1. For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinT-SSF are trained on the Vimeo-90k dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size of 8, and crop size of 256 × 256, followed by 50K steps of training with learning rate 10−5 and crop size 384 × 256. The models are trained with 8 β values {2^γ × 10^−4 : γ ∈ {0, 1, ..., 7}}. We adopt one critical trick from (Jaegle et al., 2021; Meister et al., 2018) to stabilize training, i.e. forwarding each video sequence twice during one optimization step (mini-batch): once in the original frame order, and once in the reversed frame order. Finally, we add the flow loss5 only between 0 and 200K steps, which we found is not critical for stable training but improves the RD.
Evaluation We evaluate image compression models on 4 datasets: Kodak (Kodak, 1999), the CLIC2021 testset (CLIC, 2021), the Tecnick testset (Asuni & Giachetti, 2014), and the JPEG-AI testset (JPEG-AI, 2020). We use BPG and VTM-12.1 to code the images in YUV444 mode, and then calculate PSNR in RGB. For a fair comparison, all images are cropped to multiples of 256 to avoid padding for neural codecs.
We evaluate P-frame models on UVG (Mercat et al., 2020)6 and MCL-JCV (Wang et al., 2016), and compare them with the test model implementation of HEVC, referred to as HEVC (HM), and the open-source library implementation of HEVC, referred to as HEVC (x265). To align configurations, all video codecs are evaluated in low-delay-P mode with a fixed GOP size of 12.
Besides rate-distortion curves, we also evaluate different models using BD-rate (Tan et al., 2016), which represents the average bitrate saving for the same reconstruction quality. For image codecs, BD-rate is computed for each image and then averaged across all images; for video codecs, BD-rate is computed for each video and then averaged across all videos. More details on testset preprocessing and traditional codec configurations can be found in Appendix B.2.
5We did not observe RD improvement when applying flow loss to Conv-SSF training. 6We use the original 7 UVG sequences that are commonly used in other works (Agustsson et al., 2020).
4.2 RESULTS
RD and BD-rate for image codecs The RD curves for all compared image codecs evaluated on Kodak are shown in Figure 3a, and the relative rate reduction of each codec compared to VTM-12.1 at a range of PSNR levels is shown in Figure 3b 7.
As can be seen from Figure 3, SwinT transform consistently outperforms its convolutional counterpart; the RD-performance of SwinT-Hyperprior is on-par with Conv-ChARM, despite the simpler prior; SwinT-ChARM outperforms VTM-12.1 across a wide PSNR range. In the Appendix (Figure 28 and Figure 30), we further incorporate the results from existing literature known to us for a complete comparison. Particularly, our Conv-Hyperprior is much better than the results reported in (Minnen et al., 2018) (no context), and Conv-ChARM is on par with (Minnen & Singh, 2020).
In Table 1, we summarize the BD-rate of image codecs across all four datasets with VTM-12.1 as anchor. On average, SwinT-ChARM is able to achieve a 3.8% rate reduction compared to VTM-12.1. The relative gain from Conv-Hyperprior to SwinT-Hyperprior is on average 12%, and that from Conv-ChARM to SwinT-ChARM is on average 9%. Further gains over VTM-12.1 can be obtained by test-time latent optimization (Campos et al., 2019) or full model instance adaptation (van Rozendaal et al., 2021), which are out of the scope of this work.
7The relative rate-saving curves in Figure 3b are generated by first interpolating the discrete RD points (averaged across the testset) with a cubic spline, and then comparing the bitrate of different models at fixed PSNR.
8RD plot for the other three datasets can be found in Appendix (Figure 14, Figure 15, and Figure 16)
RD and BD-rate for video codecs For P-frame compression, we evaluate SwinT-SSF on UVG and MCL-JCV, with RD comparisons shown in Figure 4. Again, the SwinT transform leads to consistently better RD. Table 2 summarizes BD-rate with our reproduced Conv-SSF model as anchor. We can see that SwinT-SSF achieves an average of 11% rate saving over Conv-SSF. Additionally, we show that if the SwinT transform is only applied to the residual autoencoder (labeled as SwinT-SSF-Res), it only obtains about a 4.6% gain, which indicates that both flow and residual compression benefit from SwinT as encoder and decoder transforms. Note that SwinT-SSF still lags behind HM, suggesting considerable room for improvement in neural video compression. For a per-video breakdown of BD-rate, see Figure 18 and Figure 17 in the Appendix.
Decoding complexity We evaluate the decoding complexity of 4 image codecs on 100 images of size 768 × 512 and report the metrics in Table 3, including decoding time, GMACs, GPU peak memory during decoding, and total model parameters. The models run with PyTorch 1.9.0 on a workstation with one RTX 2080 Ti GPU. From the table, the inference time of the SwinT decoder is less than that of the Conv decoder. The entropy decoding time of the ChARM prior is about twice that of the factorized prior. The total decoding time of SwinT-based models is less than that of Conv-based models. In ablation study A5, we show a smaller SwinT-Hyperprior with 20.6M parameters has almost the same RD as the SwinT-Hyperprior profiled here. For details on encoding complexity, the profiling setup, and scaling with image resolution, please refer to Table 4 and Section D.3 in the Appendix.
Scaling behavior To see how the BD-rate varies with model size, we scale SwinT-Hyperprior and Conv-Hyperprior to be twice or half the size of the base models (i.e. medium size)9. The result is shown in Figure 5. For both types of models, as we reduce the base model size, there is a sharp drop in performance, while doubling the model size only leads to marginal gain. Noticeably, SwinT-Hyperprior-small is on par with Conv-Hyperprior-medium even with half the parameters, and SwinT transforms in general incur fewer MACs per parameter.
In Figure 1, we further consolidate the decoding latency and scaling behavior study into a single plot and show that SwinT-ChARM runs at comparable speed to VTM-12.1 while achieving better performance,10 as opposed to state-of-the-art neural codecs with spatial autoregressive priors that decode orders of magnitude slower.
4.3 ANALYSIS
Latent correlation One of the motivating principles of transform coding is that simple coding can be made more effective in the transform domain than in the original signal space (Goyal, 2001; Ballé et al., 2021). A desirable transform would decorrelate the source signal so that simple scalar quantization and factorized entropy model can be applied without constraining coding performance. In most mature neural compression solutions, uniform scalar quantization is adopted together with a learned factorized or conditionally factorized Gaussian prior distribution. It is critical, then, to effectively factorize and Gaussianize the source distribution so that coding overhead can be minimized.
Specifically, in hyperprior based models (Ballé et al., 2018), ȳ ≜ (y − µ)/σ is modeled as a standard spherical normal vector. The effectiveness of the analysis transform ga can then be evaluated by measuring how much correlation there is among different elements in ȳ. We are particularly interested in measuring the correlation between nearby spatial positions, which are heavily correlated in the source domain for natural images. In Figure 6, we visualize the normalized spatial correlation of ȳ averaged over all latent channels, and compare Conv-Hyperprior with SwinT-Hyperprior at β = 0.001. It can be observed that while both lead to small cross-correlations, the Swin transformer does a much better job with uniformly smaller correlation values, and the observation is consistent
9detailed model configurations are provided in Appendix A.3. 10Note that it is difficult to fairly compare the decoding time of VTM and neural codecs since they run on different hardware. For more discussion please refer to Appendix D.3. 11The value with index (i, j) corresponds to the normalized cross-correlation of latents at spatial location (w, h) and (w + i, h+ j), averaged across all latent elements of all images on Kodak.
with other β values, which are provided in Figure 20 in the Appendix. This suggests that transformer-based transforms incur less redundancy across different spatial latent locations than convolutional ones, leading to an overall better rate-distortion trade-off. The larger spatial correlation (and thus redundancy) in Conv-Hyperprior also explains why a compute-heavy spatial auto-regressive model is often needed to improve RD with convolution-based transforms (Minnen et al., 2018; Lee et al., 2019; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020). Figure 6 also reveals that most of the correlation of a latent comes from the four elements surrounding it. This suggests that a checkerboard-based conditional prior model (He et al., 2021) may yield further coding gain.
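To make the measurement concrete, the following sketch (with hypothetical tensor handles y, mu, sigma) estimates the normalized spatial cross-correlation of ȳ; circular shifts are used, so boundary effects are ignored:

import torch

def spatial_correlation(y, mu, sigma, max_offset=4):
    # Normalize the latent with the prior parameters: y_bar = (y - mu) / sigma.
    y_bar = (y - mu) / sigma  # shape (B, C, H, W)
    corr = torch.zeros(2 * max_offset + 1, 2 * max_offset + 1)
    for i in range(-max_offset, max_offset + 1):
        for j in range(-max_offset, max_offset + 1):
            # Circular shift along the spatial dims (boundary effects ignored).
            shifted = torch.roll(y_bar, shifts=(i, j), dims=(2, 3))
            # Average cross-correlation over batch, channels, and positions.
            corr[i + max_offset, j + max_offset] = (y_bar * shifted).mean()
    # Normalize by the zero-offset value so the center equals 1.
    return corr / corr[max_offset, max_offset]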
Effective receptive field Intra prediction in HEVC or AV1 relies only on the left and top borders of the current coding block (Sullivan et al., 2012; Chen et al., 2018), except for intra block copy for screen content (Xu et al., 2016). We would like to see how large the effective receptive field (ERF) (Luo et al., 2017) of SwinT encoders is compared to Conv encoders. The theoretical receptive field of the encoders (ga, ha ◦ ga) in SwinT-based codecs is much larger than that of Conv-based codecs. However, comparing Figure 7a with 7e and Figure 7b with 7f, the ERF of SwinT encoders after training is even smaller than that of Conv encoders. When we examine the ERF of the released Swin transformers for classification, detection and segmentation tasks, they all span the whole input image. This contrast suggests that (natural) image compression with a rate-distortion objective is a local task, even with transformer-based nonlinear transforms. We further look into the P-frame compression models, particularly the ERF of the two types of transforms in the flow codec and residual codec, as shown in Figure 7d & 7h, and Figure 7c & 7g. Clearly, for the flow codec, the SwinT transform has a much larger ERF than its convolutional counterpart. For the residual codec, the ERF of SwinT transforms is similar to the image (I-frame) compression case. This shows the flexibility of SwinT encoders to attend to longer or shorter ranges depending on the task. To get a better picture of the behavior of attention layers in SwinT transforms, we also show the attention distance in each layer in Figure 22.
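As a reference, a minimal sketch of the gradient-based ERF measurement of Luo et al. (2017), applied to a hypothetical encoder handle:

import torch

def effective_receptive_field(encoder, x):
    # Gradient of the center latent position with respect to the input image.
    x = x.clone().requires_grad_(True)
    y = encoder(x)  # latent of shape (B, C, H, W)
    y[:, :, y.shape[2] // 2, y.shape[3] // 2].abs().sum().backward()
    # Aggregate gradient magnitude over batch and color channels.
    return x.grad.abs().sum(dim=(0, 1))  # (H_in, W_in) ERF map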
Progressive decoding The ERF in the previous section shows the behavior of the encoder transforms; here we further investigate the decoder transforms through the lens of progressive decoding (Rippel et al., 2014; Minnen & Singh, 2020; Lu et al., 2021). Initialized with the prior mean, the input to the decoder is progressively updated with the dequantized latent ŷ in terms of coding units, leading to gradually improved reconstruction quality. For a latent with shape (C,H,W ), we consider three types of coding units, i.e. per channel (1, H,W ), per pixel (C, 1, 1), and per element (1, 1, 1). The coding units are ordered by the sum of the prior std of all elements within each unit. The RD curves of progressive decoding for SwinT-Hyperprior and Conv-Hyperprior are shown in Figure 8a; they closely follow each other when ordered by channel or element, but are significantly apart when ordered by pixel (spatial dim). In particular, we show an extreme case where half of the pixels in the latent (masked by a checkerboard pattern) are updated with dequantized values, corresponding to the two scatter points in Figure 8a. One visual example (CLIC2021 test) under this setup is shown in Figure 8b, where we can clearly see the SwinT decoder achieves better reconstruction quality than the Conv decoder, mainly in terms of a more localized response to a single latent pixel. This is potentially useful for region-of-interest decompression. More visual examples are shown in Figure 26.
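A simplified sketch of this procedure for per-element coding units, assuming hypothetical handles for the decoder, the dequantized latent y_hat, and the prior parameters mu and sigma:

import torch

def progressive_decode(decoder, y_hat, mu, sigma, fractions=(0.25, 0.5, 0.75, 1.0)):
    # Order latent elements by descending prior std (most uncertain first).
    order = torch.argsort(sigma.flatten(), descending=True)
    recons = []
    for frac in fractions:
        k = int(frac * order.numel())
        mask = torch.zeros(y_hat.numel(), device=y_hat.device)
        mask[order[:k]] = 1.0
        mask = mask.view_as(y_hat)
        # Elements not yet sent are initialized with the prior mean.
        y_partial = mask * y_hat + (1 - mask) * mu
        recons.append(decoder(y_partial))
    return recons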
4.4 ABLATION STUDY
Relative position bias There are two sources of positional information in SwinT transforms, namely the Space-to-Depth modules and the additive relative position bias (RPB). Even when the RPB is removed, SwinT-Hyperprior still outperforms Conv-Hyperprior across all bitrates, which indicates image compression may not require accurate relative position information.
Shifted window The original motivation of the shifted window design is to promote inter-layer feature propagation across non-overlapping windows. Image compression performance drops only slightly when there is no shifted window at all. This further suggests that image compression mainly requires local information.
The details of ablations A3-A5 in Figure 9 can be found in Section F of the appendix.
5 CONCLUSION
In this work we propose Swin transformer based transforms for image and video compression. In the image compression setting, the SwinT transform consistently outperforms its convolutional counterpart. In particular, the proposed SwinT-ChARM model outperforms VTM-12.1 at comparable decoding speed, which, to the best of our knowledge, is a first among learning-based methods. We also show the effectiveness of SwinT transforms when extended to the P-frame compression setting. Compared with convolutional transforms, SwinT transforms spatially decorrelate the latent better, have a more flexible receptive field that adapts to tasks requiring either short-range (image) or long-range (motion) information, and support better progressive decoding of latent pixels. While pushing neural image compression to a new level in terms of the rate-distortion-computation trade-off, we believe this is only the starting point for developing more efficient transformer-based image and video codecs.
ACKNOWLEDGMENTS
We would like to thank Amir Said for developing entropy coding and great advice on data compression in general. We would also appreciate the helpful discussions from Reza Pourreza and Hoang Le, and draft reviews from Auke Wiggers and Johann Brehmer.
Appendix
Table of Contents
A Models
  A.1 Convolution baselines
  A.2 Swin-Transformer based compression models
  A.3 Model configurations for model size scaling study
B Training and Evaluation
  B.1 Training
  B.2 Traditional codec evaluation
C BD rate computation
  C.1 BD rate for image codec
  C.2 BD rate for video codec
D More Results
  D.1 Image compression
  D.2 Video Compression
  D.3 Coding complexity
E Analysis
  E.1 Spatial correlation of latent
  E.2 Effective Receptive Field
  E.3 Rate distribution across latent channels
  E.4 Centered kernel alignment
  E.5 Progressive decoding
F More ablation studies
A MODELS
A.1 CONVOLUTION BASELINES
Conv-Hyperprior and Conv-ChARM The architectures of Conv-Hyperprior and Conv-ChARM are shown in Figure 10 and Figure 11. For both architectures, our base model (i.e. medium size) has the following hyperparameters: (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192).
A.2 SWIN-TRANSFORMER BASED COMPRESSION MODELS
SwinT-Hyperprior, SwinT-ChARM For both SwinT-Hyperprior and SwinT-ChARM, we use the same configurations: (wg, wh) = (8, 4), (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192), (d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1) where C, d, and w are defined in Figure 13 and Figure 2. The head dim is 32 for all attention layers in SwinT-based models.
SwinT-SSF For the SwinT transforms used in the SSF variant, the first Patch Merge block has a downsampling rate of 4 and the two other Patch Merge blocks have a downsampling rate of 2. Thus the overall downsampling rate of the encoder is still 16, the same as in the image compression models. There are only 3 transformer stages, with depths 2, 4, 2. The embedding dim is 96. The number of latent and hyper-latent channels is 192 for both. The window size is 4 for the flow codec and 8 for the residual codec.
SwinT-SSF-Res This is a variant where only the residual autoencoder uses SwinT transforms; it has the same architecture as the residual autoencoder in SwinT-SSF.
A.3 MODEL CONFIGURATIONS FOR MODEL SIZE SCALING STUDY
A.3.1 SWINT-HYPERPRIOR
Set of model hyperparameters that are common to all experiments:
(d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1)
(wg, wh) = (8, 4)
SwinT-Hyperprior (small) (C1, C2, C3, C4, C5, C6) = (96, 128, 160, 192, 96, 128)
SwinT-Hyperprior (medium) (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192)
SwinT-Hyperprior (large) (C1, C2, C3, C4, C5, C6) = (160, 256, 352, 448, 192, 256)
A.3.2 CONV-HYPERPRIOR
Conv-Hyperprior (small) (C1, C2, C3, C4, C5, C6, C7) = (192, 192, 192, 192, 128, 128, 128)
Conv-Hyperprior (medium) (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192)
Conv-Hyperprior (large) (C1, C2, C3, C4, C5, C6, C7) = (448, 448, 448, 448, 256, 256, 256)
B TRAINING AND EVALUATION
B.1 TRAINING
All image compression models are trained on the CLIC2020 training set, which contains both the professional and mobile training sets, in total 1,633 high-resolution natural images. Conv-Hyperprior and SwinT-Hyperprior are trained for 2M batches. Each batch contains 8 patches of size 256 × 256 randomly cropped from the training images. The learning rate starts at 10−4 and is reduced to 10−5 at the 1.8M step.
For Conv-ChARM, we first train a model at β = 0.0001 from scratch for 2M steps; using it as the starting point, we continue to train the other beta values Conv-ChARM-β, β ∈ B, for 1.5M steps. For SwinT-ChARM-β, we load the transform weights from the 2M-step checkpoint of the pretrained SwinT-Hyperprior-β, then finetune the transforms together with the randomly initialized ChARM prior for 1.1M steps. The learning rate starts at 10−4 and is reduced to 10−5 for the last 100K steps.
The training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering the rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rates and distortions, for each solution we train 5 models with β ∈ B = {0.003, 0.001, 0.0003, 0.0001, 0.00003}. Models targeting larger bitrates (i.e. smaller β) usually need to be trained longer to converge. In particular, for the results presented in this paper, we train SwinT-Hyperprior-0.00003 for 2.5M steps instead of the 2M steps used for the other 4 lower bitrates.
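For concreteness, the per-batch loss can be sketched as below, where likelihoods is assumed to be a list of tensors holding the discrete probabilities the (hyper)priors assign to the quantized latents:

import torch
import torch.nn.functional as F

def rd_loss(x, x_hat, likelihoods, beta):
    # Rate: total bits of all latents and hyper-latents, normalized per pixel.
    num_pixels = x.shape[0] * x.shape[2] * x.shape[3]
    bpp = sum(-torch.log2(l).sum() for l in likelihoods) / num_pixels
    # Distortion: MSE in RGB color space.
    mse = F.mse_loss(x_hat, x)
    return mse + beta * bpp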
For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinT-SSF are trained on the Vimeo-90k dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size of 8, and crop size of 256 × 256, followed by 50K steps of training with learning rate 10−5 and crop size 384 × 256.12 The models are trained with 8 β values 2γ × 10−4 : γ ∈ {0, 1, ..., 7}. We adopt one critical trick from (Jaegle et al., 2021; Meister et al., 2018) to stabilize the training: forward each video sequence twice during one optimization step (mini-batch), once in the original frame order and once in the reversed frame order. When this trick is used, we set the batch size to 4 instead of 8. Finally, we add the flow loss only between 0 and 200K steps, which we found is not critical for stable training but helps improve the RD.
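A sketch of the bidirectional-order trick within one optimization step; model.rd_loss is a hypothetical handle for the P-frame rate-distortion loss:

import torch

def training_step(model, frames, optimizer, beta):
    # frames: (B, T, C, H, W). Forward each sequence twice per step:
    # once in the original order, once in reversed temporal order.
    loss = model.rd_loss(frames, beta) + model.rd_loss(frames.flip(dims=[1]), beta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()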
For all model training, the Adam optimizer is used without weight decay. Training for 2M steps takes about 10 days for Conv-Hyperprior and 14 days for SwinT-Hyperprior on a single Nvidia V100 GPU. For the P-frame models, the total training time is about 7.5 days on a single Nvidia V100 GPU.
For all models, we use mixed quantization during training (Minnen & Singh, 2020), i.e., adding uniform noise to the continuous latent before passing it to the prior model, and subtracting the prior mean from the continuous latent followed by rounding before passing it to the decoder transform.
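A minimal sketch of this mixed quantization, assuming mu is the prior mean predicted by the hyper-decoder; the straight-through gradient for the rounded path is one common choice and is an assumption of this sketch:

import torch

def mixed_quantize(y, mu):
    # Noisy latent for the rate term (differentiable proxy for rounding).
    y_rate = y + torch.empty_like(y).uniform_(-0.5, 0.5)
    # Rounded latent for the decoder: round(y - mu) + mu, with a
    # straight-through estimator so gradients still flow to the encoder.
    centered = y - mu
    y_dec = centered + (torch.round(centered) - centered).detach() + mu
    return y_rate, y_dec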
12We did not use the crop size 384 × 384 during the second stage as in the original paper because the resolution of the Vimeo dataset is 448 × 256. We found that in our case increasing the crop size from 256 × 256 to 384 × 256 in the second stage does not improve RD.
B.2 TRADITIONAL CODEC EVALUATION
In this section, we provide the evaluation scripts used to generate results for the traditional codecs.
B.2.1 IMAGE CODECS
VTM-12.1: The VTM-12.1 software is built from https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/-/tags/VTM-12.1 and we use the script from CompressAI (https://github.com/InterDigitalInc/CompressAI/tree/efc69ea24) for dataset evaluation. Specifically, the following command is issued to gather VTM-12.1 image compression evaluation results:
python -m compressai.utils.bench vtm [path to image folder] \
    -c [path to VVCSoftware_VTM folder]/cfg/encoder_intra_vtm.cfg \
    -b [path to VVCSoftware_VTM folder]/bin \
    -q 16,18,20,22,24,26,28,30,32,34,36,38,40
BPG: BPG software is obtained from https://bellard.org/bpg/ and the following commands are used for encoding and decoding.
bpgenc -e x265 -q [0 to 51] -f 444 -o [encoded heic file] [original png file]
bpgdec -o [decoded png file] [encoded heic file]
B.2.2 VIDEO CODECS
HEVC (x265)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] \
    -i [input yuv420 raw video] -c:v libx265 -preset medium \
    -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency \
    -x265-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
AVC (x264)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] \
    -i [input yuv420 raw video] -c:v libx264 -preset medium \
    -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency \
    -x264-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
HEVC (HM)
[Path to HM folder]/bin/TAppEncoderStatic \
    -c [Path to HM folder]/cfg/encoder_lowdelay_P_main.cfg \
    -i [input yuv raw video] --InputBitDepth=8 -wdt [width] -hgt [height] \
    -fr [frame-rate] -f [number of frames] -o [output yuv video] \
    -b [encoded bitstream bin file] -ip 12 -q [12, 17, 22, 27, 32, 37, 42]
C BD RATE COMPUTATION
import numpy as np
import scipy.interpolate


def bjontegaard_delta_rate(
    rate_ref, psnr_ref,  # reference codec; rate and PSNR in ascending order
    rate_new, psnr_new,  # new codec
):
    # Integrate log-rate over the overlapping PSNR range, clipped to [30, 44].
    min_psnr = max(psnr_ref[0], psnr_new[0], 30)
    max_psnr = min(psnr_ref[-1], psnr_new[-1], 44)

    log_rate_ref = np.log(rate_ref)
    log_rate_new = np.log(rate_new)

    spline_ref = scipy.interpolate.CubicSpline(
        psnr_ref, log_rate_ref, bc_type='not-a-knot', extrapolate=True,
    )
    spline_new = scipy.interpolate.CubicSpline(
        psnr_new, log_rate_new, bc_type='not-a-knot', extrapolate=True,
    )

    delta_log_rate = (
        spline_new.integrate(min_psnr, max_psnr)
        - spline_ref.integrate(min_psnr, max_psnr)
    )
    # Average log-rate difference over the PSNR range, as a percentage.
    delta_rate = np.exp(delta_log_rate / (max_psnr - min_psnr))
    return 100 * (delta_rate - 1)
C.1 BD RATE FOR IMAGE CODEC
# Evaluate BD-rate on an image dataset
bd_rates = []
for image in image_dataset:
    # Evaluate rate and PSNR of the reference and the new codec
    # on this image at different quality settings.
    rate_ref, psnr_ref = ReferenceCodec(image, qp=[...])
    rate_new, psnr_new = NewImageCodec(image, beta=[...])
    bd_rates.append(
        bjontegaard_delta_rate(rate_ref, psnr_ref, rate_new, psnr_new)
    )

# BD-rate is computed per image and then averaged.
bd_rate = np.mean(bd_rates)
C.2 BD RATE FOR VIDEO CODEC
# Evaluate BD-rate on a video dataset
bd_rates = []
for video in video_dataset:
    # Evaluate rate and PSNR of the reference and the new codec
    # on this video at different quality settings.
    rate_ref, psnr_ref = ReferenceCodec(video, qp=[...])
    rate_new, psnr_new = NewVideoCodec(video, beta=[...])
    bd_rates.append(
        bjontegaard_delta_rate(rate_ref, psnr_ref, rate_new, psnr_new)
    )

# BD-rate is computed per video and then averaged.
bd_rate = np.mean(bd_rates)
D MORE RESULTS
D.1 IMAGE COMPRESSION
Additional rate-distortion results on CLIC2021, Tecnick, and JPEG-AI are provided in Figure 14, Figure 15, and Figure 16.
For a complete comparison with results from the existing literature, we provide a summary RD plot on Kodak of all neural image codec solutions known to us in Figure 28. In Figure 29 and Figure 30, we plot the percentage rate saving with BPG444 and VTM-12.1 as references, respectively.
D.2 VIDEO COMPRESSION
In Figure 17 and Figure 18, we provide a performance comparison of Conv-SSF, SwinT-SSF, HEVC (x265), and AVC (x264) with a per-video breakdown.
D.3 CODING COMPLEXITY
We evaluate the decoding complexity of all neural image codecs in terms of the time for network inference and entropy coding, peak GPU memory, model size, etc. We select 100 high-resolution images from the CLIC testset and center-crop them to three resolutions (768 × 512, 1280 × 768, 1792 × 1024) to see how those metrics scale with image size. Batch size is one for all model inference. We run the experiment on a local workstation with one RTX 2080 Ti GPU, with PyTorch 1.9.0 and Cuda toolkit 11.1. For bit-exact entropy coding, we need to use deterministic13 convolution. The neural networks run on the single GPU and entropy coding runs on the CPU with 8 threads. We follow the standard protocols to measure the inference time and peak memory of neural nets, such as GPU warm-up and synchronization. File open/close is excluded from the coding time measurement. MACs, GPU peak memory, and model parameter count are profiled using the get_model_profile function of the deepspeed profiler14.
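A sketch of the timing protocol described above, with warm-up runs and CUDA synchronization around the timed region (the decoder and latent handles are hypothetical):

import time
import torch

def time_decoder(decoder, y_hat, warmup=10, runs=50):
    with torch.no_grad():
        for _ in range(warmup):  # GPU warm-up
            decoder(y_hat)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            decoder(y_hat)
        torch.cuda.synchronize()  # wait for all kernels to finish
    return (time.perf_counter() - start) / runs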
We show more details on the coding complexity of neural image codecs in Figure 19, particularly the linear scaling with image resolution of both SwinT-based and Conv-based models. The breakdown of encoding complexity is shown in Table 4.
For completeness, we also report the profiling for CPU coding time in Table 5 and Table 6. The evaluation setup is the same as the GPU profiling case, except models are run on the CPU instead (same host machine with Intel(R) Xeon(R) W-2123 CPU @ 3.60GHz).
Table 7 reports the encoding and decoding time of VTM-12.1 under different quantization parameters (QPs), evaluated on an Intel Core i9-9940 CPU @ 3.30GHz and averaged over the 24 Kodak images. As can be seen from the table, the decoding time of VTM-12.1 is a function of reconstruction quality, with longer decoding time observed for higher-quality reconstructions. In Figure 1, the reported VTM-12.1 decoding speed corresponds to a QP value of 28, where the bpp value is similar to that obtained by models trained with β = 0.001. It is worth pointing out that the VTM-12.1 encoding process is much slower, ranging anywhere from 1 to 5 minutes per image, whereas neural codecs run much faster.
13https://pytorch.org/docs/1.9.0/generated/torch.use_deterministic_algorithms.html?highlight=deterministic#torch.use_deterministic_algorithms
14https://www.deepspeed.ai/tutorials/flops-profiler/#usage-outside-the-deepspeed-runtime
E ANALYSIS
E.1 SPATIAL CORRELATION OF LATENT
We visualize the spatial correlation map for Conv-Hyperprior and SwinT-Hyperprior at different β in Figure 20.
E.2 EFFECTIVE RECEPTIVE FIELD
See Figure 21 for the effective receptive field of the composed encoding transforms ha ◦ ga, and Figure 22 for the mean attention distance visualization of each head within each transformer layer.
E.3 RATE DISTRIBUTION ACROSS LATENT CHANNELS
It is generally believed that ConvNets learn to extract various features and store them in the channels of their activations. Here we look into the features in the latent channels, which are quantized and entropy-coded to bitstreams. In particular, we sort the channels by their total bitrate averaged over the Kodak dataset (24 images of size 768 × 512). The result is shown in Figure 23. We find an interesting phenomenon across models trained for different bitrates: there is a cutoff point of the bitrate-vs-channel curve where the bitrate suddenly drops to zero, which manifests the rate constraint in the loss function. As expected, the cutoff index decreases for models trained for smaller bitrates (larger β).
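The per-channel bitrate can be estimated from the prior likelihoods as in this sketch, where likelihoods is a hypothetical (B, C, H, W) tensor of probabilities assigned to the quantized latent:

import torch

def per_channel_bits(likelihoods):
    # Sum bits over batch and spatial dimensions, keeping the channel axis.
    bits = -torch.log2(likelihoods).sum(dim=(0, 2, 3))
    # Sort channels by descending bitrate to expose the cutoff point.
    return torch.sort(bits, descending=True).values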
E.4 CENTERED KERNEL ALIGNMENT
To investigate the difference or similarity between the latent features of Conv-based and SwinT-based models, we resort to a commonly used tool in representation learning called centered kernel alignment (CKA) (Kornblith et al., 2019). We evaluate CKA between each Conv latent channel and each SwinT latent channel (both models trained under the same β) over the 24 Kodak images. There are 320 channels in both the Conv latent and the SwinT latent, resulting in a 320 × 320 CKA matrix, shown in Figure 24. The latent channels are ordered by the average bitrate of each channel over the Kodak images (as in Section E.3). The CKA matrix has a clear block structure, where the high-similarity region corresponds to the latent channels before the bitrate cutoff in the rate distribution curve (Figure 23).
Identification of SwinT and Conv latent channels with CKA Within the high-similarity block of the CKA matrix, we identify the 'least' similar SwinT latent channels, i.e. those with the lowest CKA values against all Conv latent channels. For each of the identified SwinT channels, we then find the Conv latent channel with the largest CKA value between the two. This way, we are able to identify latent channels of the two different models with high similarity. We show the top 8 identified channel pairs in Figure 25. The channels are indeed highly similar, up to a sign flip, even though the two model architectures are quite different. This empirical result is relevant to the literature on the identifiability of generative models (Khemakhem et al., 2020).
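For reference, a minimal sketch of linear CKA following Kornblith et al. (2019); X and Y are hypothetical (N, D) matrices of flattened latent channels (one row per image):

import torch

def linear_cka(X, Y):
    X = X - X.mean(dim=0, keepdim=True)  # center the features
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (X.T @ Y).norm() ** 2  # ||X^T Y||_F^2
    return hsic / ((X.T @ X).norm() * (Y.T @ Y).norm())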
E.5 PROGRESSIVE DECODING
More visual examples of reconstructions from checkerboard (spatially) masked latents are provided in Figure 26.
Channel-wise progressive decoding Here we visualize the behavior of channel-wise progressive decoding for the Conv and SwinT models. For both models, we order the latent channels with a heuristic importance metric: the reduction in distortion over the increase in rate if one more channel is included for decoding. Starting from the per-channel bitrate order, we pass the leading channels (zeroing out all remaining channels) to the decoder to obtain the reconstruction and compute the distortion. We plot the top 8 channels following this importance order. For each channel, we show 6 maps from top to bottom: the latent values, the mean prediction from the hyper-decoder, the standard deviation prediction from the hyper-decoder, the bitmap, the reconstruction with only the current channel, and the reconstruction with all channels up to the current one. The result is shown in Figure 27. For the Conv models, the top 3 most important channels are usually responsible for lightness and the two color components (blue-yellow, red-green), similar to the LAB colorspace. The rest of the latent channels are responsible for adding details like texture and edges. For the SwinT models, at low bitrate, there is one significantly different channel (the first column in Figure 27b), which is at a coarse scale (with smooth blocks) and is responsible for reconstructing a nearly constant image with value close to 120 (the mean pixel value of natural image datasets). This latent channel costs extremely little bitrate yet reaches a PSNR of 13dB. When we remove this first channel, progressive reconstruction with the remaining 7 leading channels only reaches a PSNR of around 16dB, instead of the 26dB shown in the figure.
F MORE ABLATION STUDIES
Local self-attention To see whether local self-attention is the most important component in transformers, we replace it with a depthwise separable convolution block15 (Han et al., 2021; El-Nouby et al., 2021), which performs spatial feature aggregation similar to self-attention, while keeping all other components of the SwinT the same. We found this change only leads to a minor degradation in RD. This suggests that other components in transformers, such as MLPs and skip connections, may also play a big role besides self-attention in the leading performance observed in our work and many other tasks (Dong et al., 2021).
Small depths Upon investigating the mean attention distance shown in Figure 22, we find that the last block in each of the last two encoder stages has about half of its attention heads degenerate to attending to fixed nearby pixels. This suggests redundant transformer blocks at those stages, so we remove those two blocks, i.e. going from depths [2, 2, 6, 2] to [2, 2, 5, 1]. The resulting SwinT-Hyperprior has even fewer parameters (20.6M) than Conv-Hyperprior (21.4M) with almost no RD loss compared to the larger model. We expect more hyperparameter search will identify models with a better RD-complexity trade-off than we currently show in this work.
Deeper Conv encoder Deeper models are usually more expressive (Raghu et al., 2017), and state-of-the-art Conv-based compression models typically use much deeper layers than the encoder in the original Hyperprior model (Ballé et al., 2018). As a sanity check on whether deeper convolutional transforms can outperform SwinT-based encoder transforms with 12 blocks, we take an existing design (Chen et al., 2021) with residual blocks and attention (sigmoid gating) layers, which has over 50 conv layers in both the encoder and the decoder and more parameters than the Conv baseline. It indeed improves the RD at lower bitrates, but is still worse than SwinT-Hyperprior, and gets much worse at higher bitrates. This is probably the reason that compression models based on this type of transform do not report results at higher bitrates.
15Conv1×1-LayerNorm-ReLU-DSConv3×3-LayerNorm-ReLU-Conv1×1-LayerNorm-ReLU
Summary Of The Paper
This paper introduces the Swin transformer for learned image and video coding. Instead of developing more advanced and expensive entropy coding approaches, the authors focus on the transform networks (i.e., the encoder/decoder networks) and improve compression performance significantly. The experimental results are impressive. Although introducing SwinT is not very new, the authors provide a neat and practical solution, which should be encouraged. The authors also strike a good balance between coding gain and complexity.
Review
Strengths:
The paper is well organized and the authors provide very comprehensive results and analysis for SwinT in compression. For example, the analysis of the ERF is quite interesting and provides some insights for follow-up works.
The experimental results are convincing. Although some recent works like Guo et al. achieve slightly better compression performance, the proposed approach is much neater.
Basically, this is the first transformer-based coding solution; although the performance is somewhat expected considering the success of vision transformers, the overall architecture and implementation are meaningful.
Weaknesses: Some related works or experiments are suggested.
(1) It would be better to provide results where SwinT-based compression also uses more expensive entropy coding approaches. Could we further boost the compression performance?
(2) There are also some approaches for low-complexity entropy coding, like He et al., Checkerboard Context Model for Efficient Learned Image Compression. If we use this approach to replace the expensive entropy coding method, could we achieve a better performance-complexity trade-off even with a Conv-based model? Please provide some discussion.
(3) There are some related works in video compression that are suggested for discussion or comparison:
(a) FVC: A New Framework Towards Deep Video Compression in Feature Space. Hu et al.
(b) ELF-VC: Efficient Learned Flexible-Rate Video Coding. Rippel et al.
ICLR | Title
Transformer-based Transform Coding
Abstract
Neural data compression based on nonlinear transform coding has made great progress over the last few years, mainly due to improvements in prior models, quantization methods and nonlinear transforms. A general trend in many recent works pushing the limit of rate-distortion performance is to use ever more expensive prior models that can lead to prohibitively slow decoding. Instead, we focus on more expressive transforms that result in a better rate-distortioncomputation trade-off. Specifically, we show that nonlinear transforms built on Swin-transformers can achieve better compression efficiency than transforms built on convolutional neural networks (ConvNets), while requiring fewer parameters and shorter decoding time. Paired with a compute-efficient Channel-wise AutoRegressive Model prior, our SwinT-ChARM model outperforms VTM-12.1 by 3.68% in BD-rate on Kodak with comparable decoding speed. In P-frame video compression setting, we are able to outperform the popular ConvNet-based scalespace-flow model by 12.35% in BD-rate on UVG. We provide model scaling studies to verify the computational efficiency of the proposed solutions and conduct several analyses to reveal the source of coding gain of transformers over ConvNets, including better spatial decorrelation, flexible effective receptive field, and more localized response of latent pixels during progressive decoding.
1 INTRODUCTION
Transform coding (Goyal, 2001) is the dominant paradigm for compression of multi-media signals, and serves as the technical foundation for many successful coding standards such as JPEG, AAC, and HEVC/VVC. Codecs based on transform coding divide the task of lossy compression into three modularized components: transform, quantization, and entropy coding. All three components can be enhanced by deep neural networks: autoencoder networks are adopted as flexible nonlinear transforms, deep generative models are used as powerful learnable entropy models, and various differentiable quantization schemes are proposed to aid end-to-end training. Thanks to these advancements, we have seen rapid progress in the domain of image and video compression. Particularly, the hyperprior line of work (Ballé et al., 2018; Minnen et al., 2018; Lee et al., 2019; Agustsson et al., 2020; Minnen & Singh, 2020) has led to steady progress of neural compression performance over the past two years, reaching or even surpassing state-of-the-art traditional codecs. For example, in image compression, BPG444 was surpassed by a neural codec in 2018 (Minnen et al., 2018), and (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) have claimed on-par or better performance than VTM (a test model of the state-of-the-art non-learned VVC standard).
One general trend in the advancement of neural image compression schemes is to develop ever more expressive yet expensive prior models based on spatial context. However, the rate-distortion improvement from context based prior modeling often comes with a hefty price tag1 in terms of decoding complexity. Noteably, all existing works that claimed on-par or better performance than VTM (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) rely on slow and expensive spatial context based prior models. ∗Equal contribution. †Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. 1In the extreme case when a latent-pixel-level spatial autoregressive prior is used, decoding of a single 512x768 image requires no less than 1536 interleaved executions of prior model inference and entropy decoding (assuming the latent is downsampled by a factor of 16x16).
The development of nonlinear transforms, on the other hand, are largely overlooked. This leads us to the following questions: can we achieve the same performance as that of expensive prior models by designing a more expressive transform together with simple prior models? And if so, how much more complexity in the transform is required?
Interestingly, we show that by leveraging and adapting the recent development of vision transformers, not only can we build neural codecs with simple prior models that can outperform ones built on expensive spatial autoregressive priors, but do so with smaller transform complexity compared to its convolutional counterparts, attaining a strictly better ratedistortion-complexity trade-off. As can be seen in Figure 1, our proposed neural image codec SwinT-ChARM can outperform VTM-12.1 at comparable decoding time, which, to the best of our knowledge, is a first in the neural compression literature.
As main contributions, we 1) extend SwinTransformer (Liu et al., 2021) to a decoder setting and build Swin-transformer based neural image codecs that attain better rate-distortion performance with lower complexity compared with existing solutions, 2) verify its effectiveness in video compression by enhancing scalespace-flow, a popular neural P-frame codec,
and 3) conduct extensive analysis and ablation study to explore differences between convolution and transformers, and investigate potential source of coding gain.
2 BACKGROUND & RELATED WORK
Conv-Hyperprior The seminal hyperprior architecture (Ballé et al., 2018; Minnen et al., 2018) is a two-level hierarchical variational autoencoder, consisting of a pair of encoder/decoder ga, gs, and a pair of hyper-encoder/hyper-decoder ha, hs. Given an input image x, a pair of latent y = ga(x) and hyper-latent z = ha(y) is computed. The quantized hyper-latent ẑ = Q(z) is modeled and entropycoded with a learned factorized prior. The latent y is modeled with a factorized Gaussian distribution p(y|ẑ) = N (µ, diag(σ)) whose parameter is given by the hyper-decoder (µ,σ) = hs(ẑ). The quantized version of the latent ŷ = Q(y−µ)+µ is then entropy coded and passed through decoder gs to derive reconstructed image x̂ = gs(ŷ). The tranforms ga, gs, ha, hs are all parameterized as ConvNets (for details, see Appendix A.1).
Conv-ChARM (Minnen & Singh, 2020) extends the baseline hyperprior architecture with a channel-wise auto-regressive model (ChARM)2, in which latent y is split along channel dimension into S groups (denoted as y1, . . . ,yS), and the Gaussian prior p(ys|ẑ, ŷ<s) is made autoregressive across groups where the mean/scale of ys depends on quantized latent in the previous groups ŷ<s. In practice, S = 10 provides a good balance of performance and complexity and is adopted here.
Spatial AR models Most of recent performance advancements of neural image compression is driven by the use of spatial auto-regressive/context models. Variants include causal global prediction (Guo et al., 2021), 3D context (Ma et al., 2021), block-level context (Wu et al., 2020), nonlocal context (Li et al., 2020; Qian et al., 2021). One common issue with these designs is that decoding cannot be parallelized along spatial dimensions, leading to impractical3decoding latency, especially for large resolution images.
2For details refer to Figure 11 and 12 in the Appendix. 3It is reported in (Wu et al., 2020)(Table I) that decoding time of spatial autoregressive models on a 512x768
image range from 2.6s to more than half a minute, depending on the specific designs. Also see Figure 1.
ConvNet-based transforms While the design space of prior models is extensively explored, nonlinear transforms, as an important component, have received less attention. A standard convolution encoder-decoder with GDN (Ballé et al., 2016; 2017) as activation is widely adopted in the literature. Later works introduce new transform designs, such as residual blocks with smaller kernels (Cheng et al., 2020), nonlocal (sigmoid gating) layers (Zhou et al., 2019; Chen et al., 2021), invertible neural networks (Xie et al., 2021), and PReLU as an efficient replacement of GDN (Egilmez et al., 2021).
Vision transformers Although many transform networks are proposed, they are still mainly based on ConvNets. Recently transformers (Vaswani et al., 2017) have been introduced to the vision domain and have shown performance competitive with ConvNets in many tasks, e.g. object detection (Carion et al., 2020), classification (Dosovitskiy et al., 2021), image enhancement (Chen et al., 2020), and semantic segmentation (Zheng et al., 2021). Inspired by their success, in this work we explore how vision transformers work as nonlinear transforms for image and video compression.
3 SWIN-TRANSFORMER BASED TRANSFORM CODING
Among the large number of vision transformer variants, we choose Swin Transformer (Liu et al., 2021) (hereafter referred to as SwinT) to build the nonlinear transforms, mainly because of 1) its linear complexity w.r.t. input resolution due to local window attention, and 2) its flexibility in handling varying input resolutions at test time, enabled by relative position bias and hierarchical architecture.
3.1 SWINT ENCODER AND DECODER
The original SwinT is proposed as a vision backbone, i.e. an encoder transform with downsampling. As shown in Figure 2, the SwinT encoder ga contains SwinT blocks interleaved with Patch Merge blocks. The Patch Merge block contains Space-to-Depth (for downsampling), LayerNorm, and Linear layers sequentially. SwinT block performs local self-attention within each non-overlapping window of the feature maps and preserves feature size. Consecutive SwinT blocks at the same feature size shift the window partitioning with respect to the previous block to promote information propagation across nearby windows in the previous block.
We adopt SwinT encoder as the encoder transform ga in our model, and extend it to SwinT decoder gs by reversing the order of blocks in ga, and replacing the Patch Merge block with a Patch Split block, which contains Linear, LayerNorm, Depth-to-Space (for upsampling) layers in sequence. The architectures for hyper transforms ha, hs are similar to ga, gs with different configurations.
4The ChARM architecture (Minnen & Singh, 2020) is detailed in Figure 12 of Appendix A.1.
With these four SwinT transforms, we propose two image compression models, SwinT-Hyperprior and SwinT-ChARM, whose prior and hyper prior models are respectively the same as in ConvHyperprior and Conv-ChARM introduced in Section 2. The full model architectures are shown in Figure 2 and Figure 13.
3.2 EXTENSION TO P-FRAME COMPRESSION
To investigate the effectiveness of SwinT transforms for video compression, we study one popular P-frame compression model called Scale-Space Flow (SSF) (Agustsson et al., 2020). There are three instances of Conv-Hyperprior in SSF, which are respectively for compressing I-frames, scale-space flow and residual. We propose a SwinT variant, referred to as SwinT-SSF, which is obtained by replacing Conv transforms ga, gs in the flow codec and residual codec of SSF with SwinT tranforms. To stabilize training of flow codec in SwinT-SSF, we need to remove all LayerNorm layers and reduce the window size (e.g. from 8 to 4). The baseline SSF model will be referred as Conv-SSF. Even though we build our solution on top of SSF, we believe this general extension can be applied to other ConvNet-based video compression models (Rippel et al., 2021; Hu et al., 2021) as well.
4 EXPERIMENTS AND ANALYSIS
4.1 EXPERIMENT SETUP
Training All image compression models are trained on the CLIC2020 training set. ConvHyperprior and SwinT-Hyperprior are trained with 2M batches. Conv-ChARM and SwinT-ChARM are trained with 3.5M and 3.1M steps. Each batch contains 8 random 256× 256 crops from training images. Training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rate and distortion, for each solution, we train 5 models with β ∈ {0.003, 0.001, 0.0003, 0.0001, 0.00003}. The detailed training schedule is in Appendix B.1. For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinTSSF are trained on Vimeo-90k Dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size of 8, crop size of 256×256, followed by 50K steps of training with learning rate 10−5 and crop size 384× 256. The models are trained with 8 β values 2γ × 10−4 : γ ∈ {0, 1, ..., 7}. We adopt one critical trick to stablize the training from (Jaegle et al., 2021; Meister et al., 2018), i.e. to forward each video sequence twice during one optimization step (mini-batch), once in the original frame order, once in the reversed frame order. Finally we add flow loss5 only between 0 and 200K steps, which we found not critical for stable training but improves the RD.
Evaluation We evaluate image compression models on 4 datasets: Kodak (Kodak, 1999), CLIC2021 testset (CLIC, 2021), Tecnick testset (Asuni & Giachetti, 2014), and JPEG-AI testset (JPEG-AI, 2020). We use BPG and VTM-12.1 to code the images in YUV444 mode, and then calculate PSNR in RGB. For a fair comparison all images are cropped to multiples of 256 to avoid padding for neural codecs.
We evaluated P-frame models on UVG (Mercat et al., 2020) 6 and MCL-JCV (Wang et al., 2016), and compare them with the test model implementation of HEVC, referred to as HEVC (HM), and open source library implementation of HEVC, refered to as HEVC (x265). To align configuration, all video codecs are evaluated in low-delay-P model with a fixed GOP size of 12.
Besides rate-distortion curves, we also evaluate different models using BD-rate (Tan et al., 2016), which represents the average bitrate savings for the same reconstruction quality. For image codecs, BD-rate is computed for each image and then averaged across all images; for video codecs, BD-rate is computed for each video and then averaged across all videos. More details on testset preprocessing, and traditional codecs configurations can be found in Appendix B.2.
5We did not observe RD improvement when applying flow loss to Conv-SSF training. 6We use the original 7 UVG sequences that are commonly used in other works (Agustsson et al., 2020).
4.2 RESULTS
RD and BD-rate for image codecs The RD curves for all compared image codecs evaluated on Kodak are shown in Figure 3a, and the relative rate reduction of each codec compared to VTM-12.1 at a range of PSNR levels is shown in Figure 3b 7.
As can be seen from Figure 3, SwinT transform consistently outperforms its convolutional counterpart; the RD-performance of SwinT-Hyperprior is on-par with Conv-ChARM, despite the simpler prior; SwinT-ChARM outperforms VTM-12.1 across a wide PSNR range. In the Appendix (Figure 28 and Figure 30), we further incorporate the results from existing literature known to us for a complete comparison. Particularly, our Conv-Hyperprior is much better than the results reported in (Minnen et al., 2018) (no context), and Conv-ChARM is on par with (Minnen & Singh, 2020).
In Table 1, we summarize the BD-rate of image codecs across all four dataset with VTM-12.1 as anchor. On average SwinT-ChARM is able to achieve 3.8% rate reduction compared to VTM12.1. The relative gain from Conv-Hyperprior to SwinT-Hyperprior is on-average 12% and that from Conv-ChARM to SwinT-ChARM is on-average 9%. Further gain over VTM-12.1 can be obtained by test-time latent optimization (Campos et al., 2019) or full model instance adaptation (van Rozendaal et al., 2021), which are out of the scope of this work.
7The relative rate-saving curves in Figure 3b is generated by first interpolating the discrete RD points (averaged across the testset) with a cubic spline, and then compare bitrate of different models at fixed PSNR.
8RD plot for the other three datasets can be found in Appendix (Figure 14, Figure 15, and Figure 16)
RD and BD-rate for video codecs For P-frame compression, we evaluated SwinT-SSF on UVG and MCL-JCV, with RD comparison shown in Figure 4. Again, SwinT transform leads to consistently better RD. Table 2 summarizes BD-rate with our reproduced Conv-SSF model as anchor. We can see that SwinT-SSF achieves an average of 11% rate saving over Conv-SSF. Additionally, we show that if SwinT transform is only applied to residual-autoencoder (labeled as SwinT-SSF-Res), it can only get about 4.6% gain, which indicates
that both flow and residual compression benefit from SwinT as encoder and decoder transforms. Note that SwinT-SSF still lags behind HM, suggesting lots of room for improvement in neural video compression. For per-video breakdown of BD-rate, see Figure 18 and Figure 17 in the Appendix.
Decoding complexity We evaluate the decoding complexity of 4 image codecs on 100 images of size 768 × 512 and show the metrics in Table 3, including decoding time, GMACs and GPU peak memory during decoding and total model parameters. The models run with PyTorch 1.9.0 on a workstation with one RTX 2080 Ti GPU. From the table, the inference time of SwinT decoder is less than that of Conv decoder. The entropy decoding time of ChARM prior is about twice than the factorized prior. The total decoding time of SwinT-based models is less than Conv-based models. In ablation study A5, we show a smaller SwinT-Hyperprior with 20.6M parameters has almost the same RD as the SwinT-Hyperprior profiled here. For details on encoding complexity, profiling setup, scaling to image resolution, please refer to Table 4 and Section D.3 in the Appendix.
Scaling behavior To see how the BD-rate varies with model size, we scale SwinTHyperprior and Conv-Hyperprior to be twice or half of the size of the base models (i.e. medium size)9. The result is shown in Figure 5. For both types of models, as we reduce the base model size, there is a sharp drop in performance, while doubling model size only leads to marginal gain. Noticeably, SwinT-Hyperpriorsmall is on-par with Conv-Hyperprior-medium even with half of the parameters, and SwinT transforms in general incur fewer MACs per parameter.
In Figure 1, we further consolidate the decoding latency and scaling behavior study into a single plot and show that SwinT-ChARM runs at comparable speed as VTM-12.1 while achiev-
ing better performance,10 as opposed to state-of-the-art neural codecs with spatial autoregressive prior that decodes orders of magnitude slower.
4.3 ANALYSIS
Latent correlation One of the motivating principles of transform coding is that simple coding can be made more effective in the transform domain than in the original signal space (Goyal, 2001; Ballé et al., 2021). A desirable transform would decorrelate the source signal so that simple scalar quantization and factorized entropy model can be applied without constraining coding performance. In most mature neural compression solutions, uniform scalar quantization is adopted together with a learned factorized or conditionally factorized Gaussian prior distribution. It is critical, then, to effectively factorize and Gaussianize the source distribution so that coding overhead can be minimized.
Specifically, in hyperprior based models (Ballé et al., 2018), ȳ , (y − µ)/σ is modeled as a standard spherical normal vector. The effectiveness of the analysis transform ga can then be evaluated by measuring how much correlation there is among different elements in ȳ. We are particularly interested in measuring the correlation between nearby spatial positions, which are heavily correlated in the source domain for natural images. In Figure 6, we visualize the normalized spatial correlation of ȳ averaged over all latent channels, and compare Conv-Hyperprior with SwinT-Hyperprior at β = 0.001. It can be observed that while both lead to small cross-correlations, Swin-Transformer does a much better job with uniformly smaller correlation values, and the observation is consistent
9detailed model configurations are provided in Appendix A.3. 10Note that it is difficult to fairly compare the decoding time of VTM and neural codecs since they run on different hardware. For more discussion please refer to Appendix D.3. 11The value with index (i, j) corresponds to the normalized cross-correlation of latents at spatial location (w, h) and (w + i, h+ j), averaged across all latent elements of all images on Kodak.
with other β values, which are provided in Figure 20 in the Appendix. This suggests that transformer based transforms incur less redundancy across different spatial latent locations compared with convolutional ones, leading to an overall better rate-distortion trade-off. The larger spatial correlation (and thus redundancy) in Conv-Hyperprior also explains why a compute-heavy spatial auto-regressive model is often needed to improve RD with convolutional based transforms (Minnen et al., 2018; Lee et al., 2019; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020). Figure 6 also reveals that most of the correlation of a latent comes from the four elements surrounding it. This suggests that a checkerboard-based conditional prior model (He et al., 2021) may yield further coding gain.
Effective receptive field Intra prediction in HEVC or AV1 only rely on left and top boarders of the current coding block (Sullivan et al., 2012; Chen et al., 2018), except for intra block copy for screen content (Xu et al., 2016). We would like to see how large the effective receptive field (ERF) (Luo et al., 2017) of SwinT encoders compared to Conv encoders. The theoretical receptive field of the encoders (ga, ha ◦ ga) in SwinT-based codecs is much larger than that of Conv-based codecs. However comparing Figure 7a with 7e and Figure 7b with 7f, the ERF of SwinT encoders after training is even smaller than Conv encoders. When we examine the ERF of the released Swin transformers for classification, detection and segmentation tasks, they are all spanning the whole input image. This contrast suggests that (natural) image compression with rate-distortion objective is a local task, even with transformer-based nonlinear transforms. We further look into P-frame compression models, particularly the ERF of two types of transforms in flow codec and residual codec, as shown in Figure 7d & 7h, and Figure 7c & 7g. Clearly for flow codec, SwinT transform has much larger ERF than the convolution counterpart. For residual codec, the ERF of SwinT transforms is similar to image (I-frame) compression case. This shows of flexibility of SwinT encoders to attend to longer or shorter range depending on the tasks. To get a better picture of the behavior of attention layers in SwinT transforms, we also show the attention distance in each layer in Figure 22.
Progressive decoding The ERF in the previous section shows the behavior of the encoder transforms, here we further investigate the decoder transforms through the lens of progressive decoding (Rippel et al., 2014; Minnen & Singh, 2020; Lu et al., 2021). Initialized with the prior mean, the input to the decoder is progressively updated with the dequantized latent ŷ in terms of coding units, leading to gradually improved reconstruction quality. For the latent with shape (C,H,W ), we consider three types of coding units, i.e. per channel (1, H,W ), per pixel (C, 1, 1), per element (1, 1, 1). The coding units are ordered by the sum of prior std of all elements within each unit. The RD curves of progressive decoding for SwinT-Hyperprior and Conv-Hyperprior are shown in Figure 8a, which closely follow each other when ordered by channel or element, but significantly apart when ordered by pixel (spatial dim). Particularly, we show an extreme case when the half pixels in the latent (masked by checkerboard pattern) are updated with dequantized values, corresponding to the two scatter points in Figure 8a. One visual example (CLIC2021 test) is shown in Figure 8b under
this setup, where we can clearly see SwinT decoder achieves better reconstruction quality than the Conv decoder, mainly in terms of more localized response to a single latent pixel. This is potentially useful for region-of-interest decompression. More visual examples are shown in Figure 26.
4.4 ABLATION STUDY
Relative position bias There are two sources of positional information in SwinT transforms, namely the Space-to-Depth modules and the additive relative position bias (RPB). Even when the RPB is removed, SwinT-Hyperprior still outperforms Conv-Hyperprior across all bitrates, which indicates image compression may not require accurate relative position.
Shifted window The original motivation of shifted window design is to promotes the inter-layer feature propagation across nonoverlapping windows. Image compression performance drops slightly when there is no shifted window at all. This further suggests image compression requires local information.
The details of ablations A3-A5 in Figure 9 can be found in Section F of the appendix.
5 CONCLUSION
In this work we propose Swin transformer based transforms for image and video compression. In the image compression setting, SwinT transform consistently outperforms its convolutional counterpart. Particularly, the proposed SwinT-ChARM model outperforms VTM-12.1 at comparable decoding speed, which, to the best of our knowledge, is the first in learning-based methods. We also show the effectiveness of SwinT transforms when extended to the P-frame compression setting. Compared with convolution transforms, SwinT transforms can spatially decorrelate the latent better, have more flexible receptive field to adapt to tasks that requires either short-range (image) and long-range (motion) information, and better progressive decoding of latent pixels. While pushing the neural image compression to a new level in terms of rate-distortion-computation trade-off, we believe it is only the starting point for developing more efficient transformer-based image and video codecs.
ACKNOWLEDGMENTS
We would like to thank Amir Said for developing entropy coding and great advice on data compression in general. We would also appreciate the helpful discussions from Reza Pourreza and Hoang Le, and draft reviews from Auke Wiggers and Johann Brehmer.
Appendix
Table of Contents
A Models
  A.1 Convolution baselines
  A.2 Swin-Transformer based compression models
  A.3 Model configurations for model size scaling study
B Training and Evaluation
  B.1 Training
  B.2 Traditional codec evaluation
C BD rate computation
  C.1 BD rate for image codec
  C.2 BD rate for video codec
D More Results
  D.1 Image compression
  D.2 Video compression
  D.3 Coding complexity
E Analysis
  E.1 Spatial correlation of latent
  E.2 Effective receptive field
  E.3 Rate distribution across latent channels
  E.4 Centered kernel alignment
  E.5 Progressive decoding
F More ablation studies
A MODELS
A.1 CONVOLUTION BASELINES
Conv-Hyperprior and Conv-ChARM The architectures of Conv-Hyperprior and Conv-ChARM are shown in Figure 10 and Figure 11. For both architectures, our base model (i.e., medium size) has the following hyperparameters: (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192).
A.2 SWIN-TRANSFORMER BASED COMPRESSION MODELS
SwinT-Hyperprior, SwinT-ChARM For both SwinT-Hyperprior and SwinT-ChARM, we use the same configuration: (wg, wh) = (8, 4), (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192), (d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1), where C, d, and w are defined in Figure 13 and Figure 2. The head dimension is 32 for all attention layers in SwinT-based models.
SwinT-SSF For the SwinT transforms used in the SSF variant, the first Patch Merge block uses a downsampling rate of 4 and the two other Patch Merge blocks use a downsampling rate of 2, so the total encoder downsampling rate is still 16, the same as in the image compression models. There are only 3 transformer stages, with depths 2, 4, 2. The embedding dimension is 96. The numbers of latent and hyper-latent channels are both 192. The window size is 4 for the flow codec and 8 for the residual codec.
SwinT-SSF-Res This is a variant where only the residual autoencoder uses SwinT transforms; it has the same architecture as the residual autoencoder in SwinT-SSF.
A.3 MODEL CONFIGURATIONS FOR MODEL SIZE SCALING STUDY
A.3.1 SWINT-HYPERPRIOR
Model hyperparameters common to all experiments: (d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1), (wg, wh) = (8, 4).
SwinT-Hyperprior (small) (C1, C2, C3, C4, C5, C6) = (96, 128, 160, 192, 96, 128)
SwinT-Hyperprior (medium) (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192)
SwinT-Hyperprior (large) (C1, C2, C3, C4, C5, C6) = (160, 256, 352, 448, 192, 256)
A.3.2 CONV-HYPERPRIOR
Conv-Hyperprior (small) (C1, C2, C3, C4, C5, C6, C7) = (192, 192, 192, 192, 128, 128, 128)
Conv-Hyperprior (medium) (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192)
Conv-Hyperprior (large) (C1, C2, C3, C4, C5, C6, C7) = (448, 448, 448, 448, 256, 256, 256)
B TRAINING AND EVALUATION
B.1 TRAINING
All image compression models are trained on the CLIC2020 training set, which contains both the professional and mobile training sets, 1,633 high-resolution natural images in total. Conv-Hyperprior and SwinT-Hyperprior are trained for 2M batches. Each batch contains 8 patches of size 256 × 256 randomly cropped from the training images. The learning rate starts at 10−4 and is reduced to 10−5 at step 1.8M.
For Conv-ChARM, we first train a model at β = 0.0001 from scratch for 2M steps; using it as the starting point, we continue to train Conv-ChARM-β for the other values β ∈ B for 1.5M steps. For SwinT-ChARM-β, we load the transform weights from the 2M-step checkpoint of the pretrained SwinT-Hyperprior-β, then finetune the transforms together with the randomly initialized ChARM prior for 1.1M steps. The learning rate starts at 10−4 and is reduced to 10−5 for the last 100K steps.
Training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering the rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rates and distortions, for each solution we train 5 models with β ∈ B = {0.003, 0.001, 0.0003, 0.0001, 0.00003}. Usually the models targeting larger bitrates (i.e., smaller β) need to be trained longer to converge. In particular, for the results presented in this paper, we train SwinT-Hyperprior-0.00003 for 2.5M steps instead of the 2M steps used for the other 4 lower-bitrate models.
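Concretely, the objective can be written in a few lines of PyTorch. The sketch below is illustrative rather than our exact training code: it assumes the model exposes per-element likelihoods y_likelihoods and z_likelihoods for the latent and hyper-latent, and that inputs are normalized to [0, 1].

import torch

def rd_loss(x, x_hat, y_likelihoods, z_likelihoods, beta):
    # Distortion D: MSE in RGB color space, assuming x, x_hat in [0, 1].
    distortion = torch.mean((x - x_hat) ** 2)
    # Rate R: total negative log2-likelihood of latent and hyper-latent,
    # expressed in bits per pixel.
    num_pixels = x.shape[0] * x.shape[2] * x.shape[3]
    rate = (-torch.log2(y_likelihoods).sum()
            - torch.log2(z_likelihoods).sum()) / num_pixels
    # L = D + beta * R
    return distortion + beta * rate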
For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinT-SSF are trained on the Vimeo-90k dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size 8, and crop size 256 × 256, followed by 50K steps of training with learning rate 10−5 and crop size 384 × 256 (see footnote 12). The models are trained with 8 β values {2^γ × 10−4 : γ ∈ {0, 1, ..., 7}}. We adopt one critical trick from (Jaegle et al., 2021; Meister et al., 2018) to stabilize training: each video sequence is forwarded twice during one optimization step (mini-batch), once in the original frame order and once in the reversed frame order. When this trick is used, we set the batch size to 4 instead of 8. Finally, we add the flow loss only between steps 0 and 200K, which we found not critical for stable training but helpful for improving the RD.
For all model training, the Adam optimizer is used without weight decay. Training for 2M steps takes about 10 days for Conv-Hyperprior and 14 days for SwinT-Hyperprior on a single Nvidia V100 GPU. For the P-frame models, total training time is about 7.5 days on a single Nvidia V100 GPU.
For all models, we use mixed quantization during training (Minnen & Singh, 2020): uniform noise is added to the continuous latent before it is passed to the prior model, while the prior mean is subtracted from the continuous latent followed by rounding before it is passed to the decoder transform.
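A minimal sketch of this mixed quantization is given below. The straight-through rounding gradient is a common implementation choice and an assumption on our part, as are the tensor names.

import torch

def mixed_quantize(y, prior_mean):
    # Latent seen by the prior model / rate term: continuous latent + uniform noise.
    y_noisy = y + torch.empty_like(y).uniform_(-0.5, 0.5)
    # Latent seen by the decoder transform: round(y - mu) + mu, with a
    # straight-through estimator so gradients still flow through the rounding.
    residual = y - prior_mean
    y_hat = prior_mean + residual + (torch.round(residual) - residual).detach()
    return y_noisy, y_hat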
12We did not use the crop size 384 × 384 during the second stage as in the original paper because the resolution of the Vimeo dataset is 448 × 256. We found that in our case increasing the crop size from 256 × 256 to 384 × 256 in the second stage does not improve RD.
B.2 TRADITIONAL CODEC EVALUATION
In this section, we provide the evaluation scripts used to generate results for the traditional codecs.
B.2.1 IMAGE CODECS
VTM-12.1: VTM-12.1 software is built from https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/-/tags/VTM-12.1 and we use the script from CompressAI (https://github.com/InterDigitalInc/CompressAI/tree/efc69ea24) for dataset evaluation. Specifically, the following command is issued to gather VTM-12.1 image compression evaluation results:

python -m compressai.utils.bench vtm [path to image folder] \
  -c [path to VVCSoftware_VTM folder]/cfg/encoder_intra_vtm.cfg \
  -b [path to VVCSoftware_VTM folder]/bin \
  -q 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40
BPG: BPG software is obtained from https://bellard.org/bpg/ and the following commands are used for encoding and decoding.
bpgenc -e x265 -q [0 to 51] -f 444 -o [encoded heic file] [original png file]
bpgdec -o [decoded png file] [encoded heic file]
B.2.2 VIDEO CODECS
HEVC (x265)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] -i [input yuv420 raw video] -c:v libx265 -preset medium -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency -x265-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
AVC (x264)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] -i [input yuv420 raw video] -c:v libx264 -preset medium -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency -x264-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
HEVC (HM)
[Path to HM folder]/bin/TAppEncoderStatic -c [Path to HM folder]/cfg/encoder_lowdelay_P_main.cfg -i [input yuv raw video] --InputBitDepth=8 -wdt [width] -hgt [height] -fr [frame-rate] -f [number of frames] -o [output yuv video] -b [encoded bitstream bin file] -ip 12 -q [12, 17, 22, 27, 32, 37, 42]
C BD RATE COMPUTATION
The BD-rate between two codecs is computed with the following function (NumPy/SciPy imports added for completeness):

import numpy as np
import scipy.interpolate

def Bjontegaard_Delta_Rate(
    rate_ref, psnr_ref,  # reference codec; rate and PSNR in ascending order
    rate_new, psnr_new,  # new codec
):
    # Integration range: the PSNR interval covered by both codecs, clipped to [30, 44] dB.
    min_psnr = max(psnr_ref[0], psnr_new[0], 30)
    max_psnr = min(psnr_ref[-1], psnr_new[-1], 44)

    log_rate_ref = np.log(rate_ref)
    log_rate_new = np.log(rate_new)

    # Cubic-spline interpolation of log-rate as a function of PSNR.
    spline_ref = scipy.interpolate.CubicSpline(
        psnr_ref, log_rate_ref, bc_type='not-a-knot', extrapolate=True)
    spline_new = scipy.interpolate.CubicSpline(
        psnr_new, log_rate_new, bc_type='not-a-knot', extrapolate=True)

    # Average log-rate difference over the PSNR range, mapped back to a rate ratio.
    delta_log_rate = (spline_new.integrate(min_psnr, max_psnr)
                      - spline_ref.integrate(min_psnr, max_psnr))
    delta_rate = np.exp(delta_log_rate / (max_psnr - min_psnr))
    return 100 * (delta_rate - 1)  # BD-rate in percent (negative = rate saving)
C.1 BD RATE FOR IMAGE CODEC
# Evaluate BD-rate on an image dataset
bd_rates = list()
for image in image_dataset:
    # evaluate rate and PSNR on the reference and new codec
    # for this image with different qualities
    rate_ref, psnr_ref = ReferenceCodec(image, qp=[...])
    rate_new, psnr_new = NewImageCodec(image, beta=[...])
    bd_rates.append(
        Bjontegaard_Delta_Rate(rate_ref, psnr_ref, rate_new, psnr_new)
    )
# BD-rate is computed per image and then averaged
bd_rate = np.mean(bd_rates)
C.2 BD RATE FOR VIDEO CODEC
# Evaluate BD-rate on a video dataset
bd_rates = list()
for video in video_dataset:
    # evaluate rate and PSNR on the reference and new codec
    # for this video with different qualities
    rate_ref, psnr_ref = ReferenceCodec(video, qp=[...])
    rate_new, psnr_new = NewVideoCodec(video, beta=[...])
    bd_rates.append(
        Bjontegaard_Delta_Rate(rate_ref, psnr_ref, rate_new, psnr_new)
    )
# BD-rate is computed per video and then averaged
bd_rate = np.mean(bd_rates)
D MORE RESULTS
D.1 IMAGE COMPRESSION
Additional rate-distortion results on CLIC2021, Tecnick, and JPEG-AI are provided in Figure 14, Figure 15, and Figure 16.
For a complete comparison with results from the existing literature, we provide a summary RD plot on Kodak of all neural image codec solutions known to us in Figure 28. In Figure 29 and Figure 30, we plot the percentage rate saving with BPG444 and VTM-12.1 as the reference, respectively.
D.2 VIDEO COMPRESSION
In Figure 17 and Figure 18, we provide a performance comparison of Conv-SSF, SwinT-SSF, HEVC (x265), and AVC (x264) with a per-video breakdown.
D.3 CODING COMPLEXITY
We evaluate the decoding complexity of all neural image codecs in terms of the time for network inference and entropy coding, peak GPU memory, model size, etc. We select 100 high-resolution images from the CLIC testset and center-crop them to three resolutions (768 × 512, 1280 × 768, 1792 × 1024) to see how those metrics scale with image size. Batch size is one for all model inference. We run the experiment on a local workstation with one RTX 2080 Ti GPU, with PyTorch 1.9.0 and CUDA toolkit 11.1. For bit-exact entropy coding, we need to use deterministic convolution (footnote 13). The neural networks run on the single GPU and entropy coding runs on the CPU with 8 threads. We follow the standard protocols to measure the inference time and peak memory of neural nets, such as GPU warm-up and synchronization. File open/close is excluded from coding time measurement. MACs, GPU peak memory, and model parameter count are profiled using the get_model_profile function of the DeepSpeed profiler (footnote 14).
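The warm-up and synchronization protocol mentioned above follows the usual pattern sketched below; this is a generic outline with illustrative names, not our exact profiling script.

import time
import torch

@torch.no_grad()
def time_inference(model, inputs, warmup=5, runs=20):
    for _ in range(warmup):      # warm up CUDA kernels and the memory allocator
        model(inputs)
    torch.cuda.synchronize()     # flush all pending GPU work before timing
    start = time.perf_counter()
    for _ in range(runs):
        model(inputs)
    torch.cuda.synchronize()     # make sure all timed work has finished
    return (time.perf_counter() - start) / runs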
We show more details on the coding complexity of neural image codecs in Figure 19, in particular the linear scaling with image resolution of both SwinT-based and Conv-based models. The breakdown of encoding complexity is shown in Table 4.
For completeness, we also report the profiling for CPU coding time in Table 5 and Table 6. The evaluation setup is the same as the GPU profiling case, except models are run on the CPU instead (same host machine with Intel(R) Xeon(R) W-2123 CPU @ 3.60GHz).
Table 7 reports the encoding and decoding time of VTM-12.1 under different quantization parameters (QPs), evaluated on an Intel Core i9-9940 CPU @ 3.30GHz and averaged over the 24 Kodak images. As can be seen from the table, the decoding time of VTM-12.1 is a function of reconstruction quality: longer decoding time is observed for higher-quality reconstruction. In Figure 1, the reported VTM-12.1 decoding speed corresponds to a QP value of 28, where the bpp value is similar to that obtained by models trained with β = 0.001. It is worth pointing out that the VTM-12.1 encoding process is much slower, ranging anywhere from 1 to 5 minutes per image, whereas the neural codecs run much faster.
13https://pytorch.org/docs/1.9.0/generated/torch.use_deterministic_algorithms.html
14https://www.deepspeed.ai/tutorials/flops-profiler/#usage-outside-the-deepspeed-runtime
E ANALYSIS
E.1 SPATIAL CORRELATION OF LATENT
We visualize the spatial correlation maps for Conv-Hyperprior and SwinT-Hyperprior at different β in Figure 20.
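For reference, the correlation at spatial offset (i, j) can be estimated as sketched below; the whitened latent ȳ = (y − µ)/σ and the array names are illustrative.

import numpy as np

def spatial_correlation(y_bar, max_offset=4):
    # y_bar: whitened latent (y - mu) / sigma with shape (C, H, W).
    C, H, W = y_bar.shape
    corr = np.zeros((2 * max_offset + 1, 2 * max_offset + 1))
    for i in range(-max_offset, max_offset + 1):
        for j in range(-max_offset, max_offset + 1):
            # Overlapping crops shifted by (i, j) relative to each other.
            a = y_bar[:, max(0, i):H + min(0, i), max(0, j):W + min(0, j)]
            b = y_bar[:, max(0, -i):H + min(0, -i), max(0, -j):W + min(0, -j)]
            corr[i + max_offset, j + max_offset] = np.mean(a * b)
    # Normalize by the zero-offset value so the center of the map is 1.
    return corr / corr[max_offset, max_offset]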
E.2 EFFECTIVE RECEPTIVE FIELD
See Figure 21 for the effective receptive field of the composed encoding transforms ha ◦ ga, and Figure 22 for the mean attention distance visualization of each head within each transformer layer.
E.3 RATE DISTRIBUTION ACROSS LATENT CHANNELS
It is generally believed that ConvNets learn to extract various features and store them in the channels of their activations. Here we look into the features in the latent channels, which are quantized and entropy-coded into the bitstream. In particular, we order the channels by their total bitrate averaged over the Kodak dataset (24 images of size 768 × 512). The result is shown in Figure 23. We find an interesting phenomenon across models trained for different bitrates: there is a cutoff point in the bitrate-vs-channel curve where the bitrate suddenly drops to zero, which manifests the rate constraint in the loss function. As expected, the cutoff index decreases for models trained for smaller bitrates (larger β).
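The per-channel bitrates used for this ordering can be estimated from the Gaussian prior as sketched below; the names are illustrative, and we assume per-element mean and scale come from the hyper decoder.

import numpy as np
from scipy.stats import norm

def channel_bitrates(y_hat, mu, sigma):
    # y_hat, mu, sigma: arrays of shape (C, H, W); y_hat = round(y - mu) + mu.
    # Probability mass of each quantized symbol over its unit-width bin.
    p = (norm.cdf(y_hat + 0.5, loc=mu, scale=sigma)
         - norm.cdf(y_hat - 0.5, loc=mu, scale=sigma))
    bits = -np.log2(np.maximum(p, 1e-12))  # clip to avoid log(0)
    return bits.sum(axis=(1, 2))           # total estimated bits per channel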
E.4 CENTERED KERNEL ALIGNMENT
To investigate the difference or similarity between the latent features of Conv-based and SwinT-based models, we resort to a commonly used tool in representation learning called centered kernel alignment (CKA) (Kornblith et al., 2019). We evaluate CKA between each Conv latent channel and each SwinT latent channel (both models trained with the same β) over the 24 Kodak images. There are 320 channels in both the Conv latent and the SwinT latent, resulting in a 320 × 320 CKA matrix, shown in Figure 24. The latent channels are ordered by the average bitrate of each channel over the Kodak images (same as in Section E.3). The CKA matrix has a clear block structure, where the high-similarity region corresponds to the latent channels before the bitrate cutoff in the rate distribution curve (Figure 23).
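Linear CKA between two feature matrices can be computed as below, following Kornblith et al. (2019); here each channel's responses over the 24 images would be flattened into the rows. The helper is a minimal sketch.

import numpy as np

def linear_cka(X, Y):
    # X: (n, p) and Y: (n, q) feature matrices with examples in the rows.
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, 'fro') ** 2
    norm_x = np.linalg.norm(X.T @ X, 'fro')
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')
    return hsic / (norm_x * norm_y)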
Identification of SwinT and Conv latent channels with CKA Within the block of high similarity in the CKA matrix, we identify the least similar SwinT latent channels, i.e., those with the lowest CKA values against all Conv latent channels. For each of the identified SwinT channels, we then find the Conv latent channel with the largest CKA value between the two. This way, we are able to identify latent channels of the two different models with high similarity. We show the top 8 identified channels in Figure 25. The channels are indeed highly similar, up to a sign flip, even though the two model architectures are quite different. This empirical result is relevant to the literature on the identifiability of generative models (Khemakhem et al., 2020).
E.5 PROGRESSIVE DECODING
More visual examples of reconstructions of checkerboard (spatially) masked latent are provided in Figure 26.
Channel-wise progressive decoding Here we visualize the behavior of channel-wise progressive decoding for the Conv and SwinT models. For both models, we order the latent channels with a heuristic importance metric: the reduction in distortion over the increase in rate when one more channel is included for decoding. Starting from the per-channel bitrate order, we pass the leading channels in that order (zeroing out all remaining channels) to the decoder to obtain the reconstruction and calculate the distortion; a sketch of this procedure follows below. We plot the top 8 channels following this importance order. For each channel, we show 6 maps from top to bottom: latent values, mean prediction from the hyper decoder, standard deviation prediction from the hyper decoder, the bitmap, the reconstruction with only the current channel, and the reconstruction with all channels up to the current one. The result is shown in Figure 27. For the Conv models, the top 3 most important channels are usually responsible for lightness and two color components (blue-yellow, red-green), similar to the LAB colorspace. The rest of the latent channels are responsible for adding details like texture and edges. For the SwinT models, at low bitrate, there is one significantly different channel (the first column in Figure 27b), which is at a coarse scale (with smooth blocks) and is responsible for the reconstruction of a nearly constant image with value close to 120 (the mean value of the natural image dataset). This latent channel costs extremely little bitrate but reaches a PSNR of 13dB. When we tried removing this first channel, progressive reconstruction with the remaining 7 leading channels only reaches a PSNR of around 16dB, instead of the 26dB shown in the figure.
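One way to implement this heuristic is sketched below: channels are revealed in bitrate order, the distortion reduction per bit of each step is recorded, and channels are then re-ranked by that score. This is our reading of the procedure rather than the exact code; `decode` and `distortion` stand in for the decoder transform and the MSE computation, and the array names are illustrative.

import numpy as np

def importance_order(y_hat, mu, decode, distortion, channel_rates):
    # y_hat, mu: (C, H, W); channel_rates: (C,) per-channel bitrates.
    bitrate_order = np.argsort(channel_rates)[::-1]  # high-rate channels first
    current = mu.copy()                              # start from the prior mean
    d_prev = distortion(decode(current))
    scores = np.zeros(len(channel_rates))
    for c in bitrate_order:
        current[c] = y_hat[c]                        # reveal channel c
        d_new = distortion(decode(current))
        scores[c] = (d_prev - d_new) / channel_rates[c]  # distortion drop per bit
        d_prev = d_new
    return np.argsort(scores)[::-1]                  # channels ranked by importance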
F MORE ABLATION STUDIES
Local self-attention To see whether local self-attention is the most important component in transformers, we replace it with a depthwise separable convolution block (footnote 15) (Han et al., 2021; El-Nouby et al., 2021), which performs spatial feature aggregation similar to self-attention, while keeping all other components the same as in SwinT. We found this change only leads to a minor degradation in RD. This suggests that other components in transformers, such as MLPs and skip connections, may also play a large role in the leading performance in our work and many other tasks, beyond self-attention alone (Dong et al., 2021).
Small depths Upon investigating the mean attention distance shown in Figure 22, we find that in the last block of each of the last two encoder stages, about half of the attention heads degenerate to attending to fixed nearby pixels. This suggests redundant transformer blocks at those stages, so we remove the two blocks, i.e., going from depths [2, 2, 6, 2] to [2, 2, 5, 1]. The resulting SwinT-Hyperprior has even fewer parameters (20.6M) than Conv-Hyperprior (21.4M), with almost no RD loss compared to the larger model. We expect more hyperparameter search to identify models with a better RD-complexity trade-off than we currently show in this work.
Deeper Conv encoder Deeper models are usually more expressive (Raghu et al., 2017), and state-of-the-art Conv-based compression models typically use much deeper layers than the encoder in the original Hyperprior model (Ballé et al., 2018). As a sanity check on whether deeper convolutional transforms can outperform SwinT-based encoder transforms with 12 blocks, we take an existing design (Chen et al., 2021) with residual blocks and attention (sigmoid gating) layers, which has over 50 conv layers in both the encoder and decoder, and more parameters than the Conv baseline. It indeed improves the RD at lower bitrates, but is still worse than SwinT-Hyperprior, and becomes much worse at higher bitrates. This is probably the reason that compression models based on this type of transform do not report results at higher bitrates.
15Conv1×1-LayerNorm-ReLU-DSConv3×3-LayerNorm-ReLU-Conv1×1-LayerNorm-ReLU

1. What is the focus of the paper regarding image and video compression networks?
2. What are the strengths of the proposed approach, particularly in terms of performance and computational complexity?
3. What are the weaknesses of the paper regarding its analysis and organization?
4. How does the reviewer assess the significance of the proposed transformer-based framework?
5. Are there any concerns or questions regarding the comparison between convolution and transformers?
This paper proposed to replace the typical cnn-based transform in image and video compression networks by Swin-transform. Experiments show its better performance and lower computational complexity over cnn-basd methods. Some extensive analysis are conducted to explore the differences between convolution and transformers.
Review
The proposed transformer-based framework shows good performance, proving that the transformer is a good replacement for typical convolution for compression tasks.
The experiments are rich, and enough evidences from many different aspects show that the transformer is just better than custom convolution.
However, a more important question is why transformer is better for image compression network, there is a lack of good organization and dig-in analysis over these many experiment results. |
ICLR | Title
Transformer-based Transform Coding
Abstract
Neural data compression based on nonlinear transform coding has made great progress over the last few years, mainly due to improvements in prior models, quantization methods and nonlinear transforms. A general trend in many recent works pushing the limit of rate-distortion performance is to use ever more expensive prior models that can lead to prohibitively slow decoding. Instead, we focus on more expressive transforms that result in a better rate-distortioncomputation trade-off. Specifically, we show that nonlinear transforms built on Swin-transformers can achieve better compression efficiency than transforms built on convolutional neural networks (ConvNets), while requiring fewer parameters and shorter decoding time. Paired with a compute-efficient Channel-wise AutoRegressive Model prior, our SwinT-ChARM model outperforms VTM-12.1 by 3.68% in BD-rate on Kodak with comparable decoding speed. In P-frame video compression setting, we are able to outperform the popular ConvNet-based scalespace-flow model by 12.35% in BD-rate on UVG. We provide model scaling studies to verify the computational efficiency of the proposed solutions and conduct several analyses to reveal the source of coding gain of transformers over ConvNets, including better spatial decorrelation, flexible effective receptive field, and more localized response of latent pixels during progressive decoding.
1 INTRODUCTION
Transform coding (Goyal, 2001) is the dominant paradigm for compression of multi-media signals, and serves as the technical foundation for many successful coding standards such as JPEG, AAC, and HEVC/VVC. Codecs based on transform coding divide the task of lossy compression into three modularized components: transform, quantization, and entropy coding. All three components can be enhanced by deep neural networks: autoencoder networks are adopted as flexible nonlinear transforms, deep generative models are used as powerful learnable entropy models, and various differentiable quantization schemes are proposed to aid end-to-end training. Thanks to these advancements, we have seen rapid progress in the domain of image and video compression. Particularly, the hyperprior line of work (Ballé et al., 2018; Minnen et al., 2018; Lee et al., 2019; Agustsson et al., 2020; Minnen & Singh, 2020) has led to steady progress of neural compression performance over the past two years, reaching or even surpassing state-of-the-art traditional codecs. For example, in image compression, BPG444 was surpassed by a neural codec in 2018 (Minnen et al., 2018), and (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) have claimed on-par or better performance than VTM (a test model of the state-of-the-art non-learned VVC standard).
One general trend in the advancement of neural image compression schemes is to develop ever more expressive yet expensive prior models based on spatial context. However, the rate-distortion improvement from context based prior modeling often comes with a hefty price tag1 in terms of decoding complexity. Noteably, all existing works that claimed on-par or better performance than VTM (Cheng et al., 2020; Xie et al., 2021; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020) rely on slow and expensive spatial context based prior models. ∗Equal contribution. †Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. 1In the extreme case when a latent-pixel-level spatial autoregressive prior is used, decoding of a single 512x768 image requires no less than 1536 interleaved executions of prior model inference and entropy decoding (assuming the latent is downsampled by a factor of 16x16).
The development of nonlinear transforms, on the other hand, are largely overlooked. This leads us to the following questions: can we achieve the same performance as that of expensive prior models by designing a more expressive transform together with simple prior models? And if so, how much more complexity in the transform is required?
Interestingly, we show that by leveraging and adapting the recent development of vision transformers, not only can we build neural codecs with simple prior models that can outperform ones built on expensive spatial autoregressive priors, but do so with smaller transform complexity compared to its convolutional counterparts, attaining a strictly better ratedistortion-complexity trade-off. As can be seen in Figure 1, our proposed neural image codec SwinT-ChARM can outperform VTM-12.1 at comparable decoding time, which, to the best of our knowledge, is a first in the neural compression literature.
As main contributions, we 1) extend SwinTransformer (Liu et al., 2021) to a decoder setting and build Swin-transformer based neural image codecs that attain better rate-distortion performance with lower complexity compared with existing solutions, 2) verify its effectiveness in video compression by enhancing scalespace-flow, a popular neural P-frame codec,
and 3) conduct extensive analysis and ablation study to explore differences between convolution and transformers, and investigate potential source of coding gain.
2 BACKGROUND & RELATED WORK
Conv-Hyperprior The seminal hyperprior architecture (Ballé et al., 2018; Minnen et al., 2018) is a two-level hierarchical variational autoencoder, consisting of a pair of encoder/decoder ga, gs, and a pair of hyper-encoder/hyper-decoder ha, hs. Given an input image x, a pair of latent y = ga(x) and hyper-latent z = ha(y) is computed. The quantized hyper-latent ẑ = Q(z) is modeled and entropycoded with a learned factorized prior. The latent y is modeled with a factorized Gaussian distribution p(y|ẑ) = N (µ, diag(σ)) whose parameter is given by the hyper-decoder (µ,σ) = hs(ẑ). The quantized version of the latent ŷ = Q(y−µ)+µ is then entropy coded and passed through decoder gs to derive reconstructed image x̂ = gs(ŷ). The tranforms ga, gs, ha, hs are all parameterized as ConvNets (for details, see Appendix A.1).
Conv-ChARM (Minnen & Singh, 2020) extends the baseline hyperprior architecture with a channel-wise auto-regressive model (ChARM)2, in which latent y is split along channel dimension into S groups (denoted as y1, . . . ,yS), and the Gaussian prior p(ys|ẑ, ŷ<s) is made autoregressive across groups where the mean/scale of ys depends on quantized latent in the previous groups ŷ<s. In practice, S = 10 provides a good balance of performance and complexity and is adopted here.
Spatial AR models Most of recent performance advancements of neural image compression is driven by the use of spatial auto-regressive/context models. Variants include causal global prediction (Guo et al., 2021), 3D context (Ma et al., 2021), block-level context (Wu et al., 2020), nonlocal context (Li et al., 2020; Qian et al., 2021). One common issue with these designs is that decoding cannot be parallelized along spatial dimensions, leading to impractical3decoding latency, especially for large resolution images.
2For details refer to Figure 11 and 12 in the Appendix. 3It is reported in (Wu et al., 2020)(Table I) that decoding time of spatial autoregressive models on a 512x768
image range from 2.6s to more than half a minute, depending on the specific designs. Also see Figure 1.
ConvNet-based transforms While the design space of prior models is extensively explored, nonlinear transforms, as an important component, have received less attention. A standard convolution encoder-decoder with GDN (Ballé et al., 2016; 2017) as activation is widely adopted in the literature. Later works introduce new transform designs, such as residual blocks with smaller kernels (Cheng et al., 2020), nonlocal (sigmoid gating) layers (Zhou et al., 2019; Chen et al., 2021), invertible neural networks (Xie et al., 2021), and PReLU as an efficient replacement of GDN (Egilmez et al., 2021).
Vision transformers Although many transform networks are proposed, they are still mainly based on ConvNets. Recently transformers (Vaswani et al., 2017) have been introduced to the vision domain and have shown performance competitive with ConvNets in many tasks, e.g. object detection (Carion et al., 2020), classification (Dosovitskiy et al., 2021), image enhancement (Chen et al., 2020), and semantic segmentation (Zheng et al., 2021). Inspired by their success, in this work we explore how vision transformers work as nonlinear transforms for image and video compression.
3 SWIN-TRANSFORMER BASED TRANSFORM CODING
Among the large number of vision transformer variants, we choose Swin Transformer (Liu et al., 2021) (hereafter referred to as SwinT) to build the nonlinear transforms, mainly because of 1) its linear complexity w.r.t. input resolution due to local window attention, and 2) its flexibility in handling varying input resolutions at test time, enabled by relative position bias and hierarchical architecture.
3.1 SWINT ENCODER AND DECODER
The original SwinT is proposed as a vision backbone, i.e. an encoder transform with downsampling. As shown in Figure 2, the SwinT encoder ga contains SwinT blocks interleaved with Patch Merge blocks. The Patch Merge block contains Space-to-Depth (for downsampling), LayerNorm, and Linear layers sequentially. SwinT block performs local self-attention within each non-overlapping window of the feature maps and preserves feature size. Consecutive SwinT blocks at the same feature size shift the window partitioning with respect to the previous block to promote information propagation across nearby windows in the previous block.
We adopt SwinT encoder as the encoder transform ga in our model, and extend it to SwinT decoder gs by reversing the order of blocks in ga, and replacing the Patch Merge block with a Patch Split block, which contains Linear, LayerNorm, Depth-to-Space (for upsampling) layers in sequence. The architectures for hyper transforms ha, hs are similar to ga, gs with different configurations.
4The ChARM architecture (Minnen & Singh, 2020) is detailed in Figure 12 of Appendix A.1.
With these four SwinT transforms, we propose two image compression models, SwinT-Hyperprior and SwinT-ChARM, whose prior and hyper prior models are respectively the same as in ConvHyperprior and Conv-ChARM introduced in Section 2. The full model architectures are shown in Figure 2 and Figure 13.
3.2 EXTENSION TO P-FRAME COMPRESSION
To investigate the effectiveness of SwinT transforms for video compression, we study one popular P-frame compression model called Scale-Space Flow (SSF) (Agustsson et al., 2020). There are three instances of Conv-Hyperprior in SSF, which are respectively for compressing I-frames, scale-space flow and residual. We propose a SwinT variant, referred to as SwinT-SSF, which is obtained by replacing Conv transforms ga, gs in the flow codec and residual codec of SSF with SwinT tranforms. To stabilize training of flow codec in SwinT-SSF, we need to remove all LayerNorm layers and reduce the window size (e.g. from 8 to 4). The baseline SSF model will be referred as Conv-SSF. Even though we build our solution on top of SSF, we believe this general extension can be applied to other ConvNet-based video compression models (Rippel et al., 2021; Hu et al., 2021) as well.
4 EXPERIMENTS AND ANALYSIS
4.1 EXPERIMENT SETUP
Training All image compression models are trained on the CLIC2020 training set. ConvHyperprior and SwinT-Hyperprior are trained with 2M batches. Conv-ChARM and SwinT-ChARM are trained with 3.5M and 3.1M steps. Each batch contains 8 random 256× 256 crops from training images. Training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rate and distortion, for each solution, we train 5 models with β ∈ {0.003, 0.001, 0.0003, 0.0001, 0.00003}. The detailed training schedule is in Appendix B.1. For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinTSSF are trained on Vimeo-90k Dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size of 8, crop size of 256×256, followed by 50K steps of training with learning rate 10−5 and crop size 384× 256. The models are trained with 8 β values 2γ × 10−4 : γ ∈ {0, 1, ..., 7}. We adopt one critical trick to stablize the training from (Jaegle et al., 2021; Meister et al., 2018), i.e. to forward each video sequence twice during one optimization step (mini-batch), once in the original frame order, once in the reversed frame order. Finally we add flow loss5 only between 0 and 200K steps, which we found not critical for stable training but improves the RD.
Evaluation We evaluate image compression models on 4 datasets: Kodak (Kodak, 1999), CLIC2021 testset (CLIC, 2021), Tecnick testset (Asuni & Giachetti, 2014), and JPEG-AI testset (JPEG-AI, 2020). We use BPG and VTM-12.1 to code the images in YUV444 mode, and then calculate PSNR in RGB. For a fair comparison all images are cropped to multiples of 256 to avoid padding for neural codecs.
We evaluated P-frame models on UVG (Mercat et al., 2020) 6 and MCL-JCV (Wang et al., 2016), and compare them with the test model implementation of HEVC, referred to as HEVC (HM), and open source library implementation of HEVC, refered to as HEVC (x265). To align configuration, all video codecs are evaluated in low-delay-P model with a fixed GOP size of 12.
Besides rate-distortion curves, we also evaluate different models using BD-rate (Tan et al., 2016), which represents the average bitrate savings for the same reconstruction quality. For image codecs, BD-rate is computed for each image and then averaged across all images; for video codecs, BD-rate is computed for each video and then averaged across all videos. More details on testset preprocessing, and traditional codecs configurations can be found in Appendix B.2.
5We did not observe RD improvement when applying flow loss to Conv-SSF training. 6We use the original 7 UVG sequences that are commonly used in other works (Agustsson et al., 2020).
4.2 RESULTS
RD and BD-rate for image codecs The RD curves for all compared image codecs evaluated on Kodak are shown in Figure 3a, and the relative rate reduction of each codec compared to VTM-12.1 at a range of PSNR levels is shown in Figure 3b 7.
As can be seen from Figure 3, SwinT transform consistently outperforms its convolutional counterpart; the RD-performance of SwinT-Hyperprior is on-par with Conv-ChARM, despite the simpler prior; SwinT-ChARM outperforms VTM-12.1 across a wide PSNR range. In the Appendix (Figure 28 and Figure 30), we further incorporate the results from existing literature known to us for a complete comparison. Particularly, our Conv-Hyperprior is much better than the results reported in (Minnen et al., 2018) (no context), and Conv-ChARM is on par with (Minnen & Singh, 2020).
In Table 1, we summarize the BD-rate of image codecs across all four dataset with VTM-12.1 as anchor. On average SwinT-ChARM is able to achieve 3.8% rate reduction compared to VTM12.1. The relative gain from Conv-Hyperprior to SwinT-Hyperprior is on-average 12% and that from Conv-ChARM to SwinT-ChARM is on-average 9%. Further gain over VTM-12.1 can be obtained by test-time latent optimization (Campos et al., 2019) or full model instance adaptation (van Rozendaal et al., 2021), which are out of the scope of this work.
7The relative rate-saving curves in Figure 3b is generated by first interpolating the discrete RD points (averaged across the testset) with a cubic spline, and then compare bitrate of different models at fixed PSNR.
8RD plot for the other three datasets can be found in Appendix (Figure 14, Figure 15, and Figure 16)
RD and BD-rate for video codecs For P-frame compression, we evaluated SwinT-SSF on UVG and MCL-JCV, with RD comparison shown in Figure 4. Again, SwinT transform leads to consistently better RD. Table 2 summarizes BD-rate with our reproduced Conv-SSF model as anchor. We can see that SwinT-SSF achieves an average of 11% rate saving over Conv-SSF. Additionally, we show that if SwinT transform is only applied to residual-autoencoder (labeled as SwinT-SSF-Res), it can only get about 4.6% gain, which indicates
that both flow and residual compression benefit from SwinT as encoder and decoder transforms. Note that SwinT-SSF still lags behind HM, suggesting lots of room for improvement in neural video compression. For per-video breakdown of BD-rate, see Figure 18 and Figure 17 in the Appendix.
Decoding complexity We evaluate the decoding complexity of 4 image codecs on 100 images of size 768 × 512 and show the metrics in Table 3, including decoding time, GMACs and GPU peak memory during decoding and total model parameters. The models run with PyTorch 1.9.0 on a workstation with one RTX 2080 Ti GPU. From the table, the inference time of SwinT decoder is less than that of Conv decoder. The entropy decoding time of ChARM prior is about twice than the factorized prior. The total decoding time of SwinT-based models is less than Conv-based models. In ablation study A5, we show a smaller SwinT-Hyperprior with 20.6M parameters has almost the same RD as the SwinT-Hyperprior profiled here. For details on encoding complexity, profiling setup, scaling to image resolution, please refer to Table 4 and Section D.3 in the Appendix.
Scaling behavior To see how the BD-rate varies with model size, we scale SwinTHyperprior and Conv-Hyperprior to be twice or half of the size of the base models (i.e. medium size)9. The result is shown in Figure 5. For both types of models, as we reduce the base model size, there is a sharp drop in performance, while doubling model size only leads to marginal gain. Noticeably, SwinT-Hyperpriorsmall is on-par with Conv-Hyperprior-medium even with half of the parameters, and SwinT transforms in general incur fewer MACs per parameter.
In Figure 1, we further consolidate the decoding latency and scaling behavior study into a single plot and show that SwinT-ChARM runs at comparable speed as VTM-12.1 while achiev-
ing better performance,10 as opposed to state-of-the-art neural codecs with spatial autoregressive prior that decodes orders of magnitude slower.
4.3 ANALYSIS
Latent correlation One of the motivating principles of transform coding is that simple coding can be made more effective in the transform domain than in the original signal space (Goyal, 2001; Ballé et al., 2021). A desirable transform would decorrelate the source signal so that simple scalar quantization and factorized entropy model can be applied without constraining coding performance. In most mature neural compression solutions, uniform scalar quantization is adopted together with a learned factorized or conditionally factorized Gaussian prior distribution. It is critical, then, to effectively factorize and Gaussianize the source distribution so that coding overhead can be minimized.
Specifically, in hyperprior based models (Ballé et al., 2018), ȳ , (y − µ)/σ is modeled as a standard spherical normal vector. The effectiveness of the analysis transform ga can then be evaluated by measuring how much correlation there is among different elements in ȳ. We are particularly interested in measuring the correlation between nearby spatial positions, which are heavily correlated in the source domain for natural images. In Figure 6, we visualize the normalized spatial correlation of ȳ averaged over all latent channels, and compare Conv-Hyperprior with SwinT-Hyperprior at β = 0.001. It can be observed that while both lead to small cross-correlations, Swin-Transformer does a much better job with uniformly smaller correlation values, and the observation is consistent
9detailed model configurations are provided in Appendix A.3. 10Note that it is difficult to fairly compare the decoding time of VTM and neural codecs since they run on different hardware. For more discussion please refer to Appendix D.3. 11The value with index (i, j) corresponds to the normalized cross-correlation of latents at spatial location (w, h) and (w + i, h+ j), averaged across all latent elements of all images on Kodak.
with other β values, which are provided in Figure 20 in the Appendix. This suggests that transformer based transforms incur less redundancy across different spatial latent locations compared with convolutional ones, leading to an overall better rate-distortion trade-off. The larger spatial correlation (and thus redundancy) in Conv-Hyperprior also explains why a compute-heavy spatial auto-regressive model is often needed to improve RD with convolutional based transforms (Minnen et al., 2018; Lee et al., 2019; Ma et al., 2021; Guo et al., 2021; Wu et al., 2020). Figure 6 also reveals that most of the correlation of a latent comes from the four elements surrounding it. This suggests that a checkerboard-based conditional prior model (He et al., 2021) may yield further coding gain.
Effective receptive field Intra prediction in HEVC or AV1 only rely on left and top boarders of the current coding block (Sullivan et al., 2012; Chen et al., 2018), except for intra block copy for screen content (Xu et al., 2016). We would like to see how large the effective receptive field (ERF) (Luo et al., 2017) of SwinT encoders compared to Conv encoders. The theoretical receptive field of the encoders (ga, ha ◦ ga) in SwinT-based codecs is much larger than that of Conv-based codecs. However comparing Figure 7a with 7e and Figure 7b with 7f, the ERF of SwinT encoders after training is even smaller than Conv encoders. When we examine the ERF of the released Swin transformers for classification, detection and segmentation tasks, they are all spanning the whole input image. This contrast suggests that (natural) image compression with rate-distortion objective is a local task, even with transformer-based nonlinear transforms. We further look into P-frame compression models, particularly the ERF of two types of transforms in flow codec and residual codec, as shown in Figure 7d & 7h, and Figure 7c & 7g. Clearly for flow codec, SwinT transform has much larger ERF than the convolution counterpart. For residual codec, the ERF of SwinT transforms is similar to image (I-frame) compression case. This shows of flexibility of SwinT encoders to attend to longer or shorter range depending on the tasks. To get a better picture of the behavior of attention layers in SwinT transforms, we also show the attention distance in each layer in Figure 22.
Progressive decoding The ERF in the previous section shows the behavior of the encoder transforms, here we further investigate the decoder transforms through the lens of progressive decoding (Rippel et al., 2014; Minnen & Singh, 2020; Lu et al., 2021). Initialized with the prior mean, the input to the decoder is progressively updated with the dequantized latent ŷ in terms of coding units, leading to gradually improved reconstruction quality. For the latent with shape (C,H,W ), we consider three types of coding units, i.e. per channel (1, H,W ), per pixel (C, 1, 1), per element (1, 1, 1). The coding units are ordered by the sum of prior std of all elements within each unit. The RD curves of progressive decoding for SwinT-Hyperprior and Conv-Hyperprior are shown in Figure 8a, which closely follow each other when ordered by channel or element, but significantly apart when ordered by pixel (spatial dim). Particularly, we show an extreme case when the half pixels in the latent (masked by checkerboard pattern) are updated with dequantized values, corresponding to the two scatter points in Figure 8a. One visual example (CLIC2021 test) is shown in Figure 8b under
this setup, where we can clearly see SwinT decoder achieves better reconstruction quality than the Conv decoder, mainly in terms of more localized response to a single latent pixel. This is potentially useful for region-of-interest decompression. More visual examples are shown in Figure 26.
4.4 ABLATION STUDY
Relative position bias There are two sources of positional information in SwinT transforms, namely the Space-to-Depth modules and the additive relative position bias (RPB). Even when the RPB is removed, SwinT-Hyperprior still outperforms Conv-Hyperprior across all bitrates, which indicates image compression may not require accurate relative position.
Shifted window The original motivation of shifted window design is to promotes the inter-layer feature propagation across nonoverlapping windows. Image compression performance drops slightly when there is no shifted window at all. This further suggests image compression requires local information.
The details of ablations A3-A5 in Figure 9 can be found in Section F of the appendix.
5 CONCLUSION
In this work we propose Swin transformer based transforms for image and video compression. In the image compression setting, SwinT transform consistently outperforms its convolutional counterpart. Particularly, the proposed SwinT-ChARM model outperforms VTM-12.1 at comparable decoding speed, which, to the best of our knowledge, is the first in learning-based methods. We also show the effectiveness of SwinT transforms when extended to the P-frame compression setting. Compared with convolution transforms, SwinT transforms can spatially decorrelate the latent better, have more flexible receptive field to adapt to tasks that requires either short-range (image) and long-range (motion) information, and better progressive decoding of latent pixels. While pushing the neural image compression to a new level in terms of rate-distortion-computation trade-off, we believe it is only the starting point for developing more efficient transformer-based image and video codecs.
ACKNOWLEDGMENTS
We would like to thank Amir Said for developing entropy coding and great advice on data compression in general. We would also appreciate the helpful discussions from Reza Pourreza and Hoang Le, and draft reviews from Auke Wiggers and Johann Brehmer.
Appendix
Table of Contents
A Models 15
A.1 Convolution baselines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.2 Swin-Transformer based compression models . . . . . . . . . . . . . . . . . . . 15 A.3 Model configurations for model size scaling study . . . . . . . . . . . . . . . . 16
B Training and Evaluation 17
B.1 Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 B.2 Traditional codec evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
C BD rate computation 19
C.1 BD rate for image codec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 C.2 BD rate for video codec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
D More Results 21
D.1 Image compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 D.2 Video Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 D.3 Coding complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
E Analysis 24
E.1 Spatial correlation of latent . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 E.2 Effective Receptive Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 E.3 Rate distribution across latent channels . . . . . . . . . . . . . . . . . . . . . . 25 E.4 Centered kernel alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 E.5 Progressive decoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
F More ablation studies 26
A MODELS
A.1 CONVOLUTION BASELINES
Conv-Hyperprior and Conv-ChARM The architecture of Conv-Hyperprior and ConvChARM are shown in Figure 10 and Figure 11. For both architecture, our base model (i.e. medium size) has the following hyperparameters: (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192).
A.2 SWIN-TRANSFORMER BASED COMPRESSION MODELS
SwinT-Hyperprior, SwinT-ChARM For both SwinT-Hyperprior and SwinT-ChARM, we use the same configurations: (wg, wh) = (8, 4), (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192), (d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1) where C, d, and w are defined in Figure 13 and Figure 2. The head dim is 32 for all attention layers in SwinT-based models.
SwinT-SSF For SwinT transforms used in SSF variant, the first Patch Merge block is with downsample rate of 4 and two other Patch Merge blocks with downsampling rate of 2. Thus the downsampling rate for the encoder is still 16, the same as the image compression models. There are only 3 transformer stages with depths 2, 4, 2. The embedding dim is 96. The number of latent and hyper latent channels are all 192. The window size is 4 for flow codec and 8 for residual codec.
SwinT-SSF-Res This is a variant where only residual autoencoder uses SwinT transforms. Same architecture as the residual autoencoder in SwinT-SSF.
A.3 MODEL CONFIGURATIONS FOR MODEL SIZE SCALING STUDY
A.3.1 SWINT-HYPERPRIOR
Set of model hyperparameters that are common to all experiments: (d1, d2, d3, d4, d5, d6) = (2, 2, 6, 2, 5, 1) (wg, wh) = (8, 4)
SwinT-Hyperprior (small) (C1, C2, C3, C4, C5, C6) = (96, 128, 160, 192, 96, 128)
SwinT-Hyperprior (medium) (C1, C2, C3, C4, C5, C6) = (128, 192, 256, 320, 192, 192)
SwinT-Hyperprior (large) (C1, C2, C3, C4, C5, C6) = (160, 256, 352, 448, 192, 256)
A.3.2 CONV-HYPERPRIOR
Conv-Hyperprior (small) (C1, C2, C3, C4, C5, C6, C7) = (192, 192, 192, 192, 128, 128, 128)
Conv-Hyperprior (medium) (C1, C2, C3, C4, C5, C6, C7) = (320, 320, 320, 320, 192, 192, 192)
Conv-Hyperprior (large) (C1, C2, C3, C4, C5, C6, C7) = (448, 448, 448, 448, 256, 256, 256)
B TRAINING AND EVALUATION
B.1 TRAINING
All image compression models are trained on CLIC2020 training set, which contains both professional and mobile training sets, in total 1,633 high resolution natural images. Conv-Hyperprior and SwinT-Hyperprior are trained with 2M batches. Each batch contains 8 patches of size 256 × 256 randomly cropped from the training images. Learning rate starts at 10−4 and is reduced to 10−5 at 1.8M step.
For Conv-ChARM, we first train a model Conv-ChARM at β = 0.0001 from scratch for 2M steps, with it as the starting point, we continue to train other beta values Conv-ChARM-β, β ∈ B for 1.5M steps. For SwinT-ChARM-β, we load the transform weights from the checkpoint at 2M step of the pretrained SwinT-Hyperprior-β, then finetune the transforms together with the random initialized ChARM prior for 1.1M steps. Learning rate starts at 10−4 and is reduced to 10−5 for the last 100K steps.
Training loss L = D + βR is a weighted combination of distortion D and bitrate R, with β being the Lagrangian multiplier steering rate-distortion trade-off. Distortion D is MSE in RGB color space. To cover a wide range of rate and distortion, for each solution, we train 5 models with β ∈ B = {0.003, 0.001, 0.0003, 0.0001, 0.00003}. Usually we need to train longer for the model with larger bitrates (i.e. smaller β) to converge. Particularly for the results presented in this paper, We train SwinT-Hyperprior-0.00003 for 2.5M steps instead of 2M steps for the other 4 lower bitrates.
For P-frame compression models, we follow the training setup of SSF. Both Conv-SSF and SwinTSSF are trained on Vimeo-90k Dataset (Xue et al., 2019) for 1M steps with learning rate 10−4, batch size of 8, crop size of 256 × 256, followed by 50K steps of training with learning rate 10−5 and crop size12 384×256. The models are trained with 8 β values 2γ×10−4 : γ ∈ {0, 1, ..., 7}. We adopt one critical trick to stablize the training from (Jaegle et al., 2021; Meister et al., 2018), i.e. to forward each video sequence twice during one optimization step (mini-batch), once in the original frame order, once in the reversed frame order. When this trick is used, we set the batch size to be 4 instead of 8. Finally we add flow loss only between 0 and 200K steps, which we found not critical for stable training but helps improve the RD.
For all model training, Adam optimizer is used without weighted decay. Training for 2M steps takes about 10 days and 14 days respectively for Conv-Hyperprior and SwinT-Hyperprior on a single Nvidia V100 GPU. Total training time is about 7.5 days on a single Nvidia V100 GPU.
For all models, we use mixed quantization during training (Minnen & Singh, 2020), i.e. adding uniform noise to the continuous latent before passing to the prior model, subtracting prior mean from the continuous latent followed by rounding before passing to the decoder transform.
12We did not use the crop size 384 × 384 during the second stage as in the original paper because the resolution of Vimeo dataset is 448 × 256. We found in our case increasing crop size from 256 × 256 to 384× 256 in the second stage does not improve RD.
B.2 TRADITIONAL CODEC EVALUATION
In this section, we provide evaluation script used to generate results for traditional codecs.
B.2.1 IMAGE CODECS
VTM-12.1: VTM-12.1 software is built from https://vcgit.hhi.fraunhofer.de/ jvet/VVCSoftware_VTM/-/tags/VTM-12.1 and we use the script from CompressAI (https://github.com/InterDigitalInc/CompressAI/tree/efc69ea24) for dataset evaluation. Specifically, the following command is issued to gather VTM-12.1 image compression evaluation results:
python -m compressai.utils.bench vtm [path to image folder] -c [path to VVCSoftware_VTM folder]/cfg/encoder_intra_vtm.cfg -b [path to VVCSoftware_VTM folder]/bin -q 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40
BPG: BPG software is obtained from https://bellard.org/bpg/ and the following commands are used for encoding and decoding.
bpgenc -e x265 -q [0 to 51] -f 444 -o [encoded heic file] [original png file] bpgdec -o [decoded png file] [encoded heic file]
B.2.2 VIDEO CODECS
HEVC (x265)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] -i [input yuv420 raw video] -c:v libx265 -preset medium -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency -x265-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
AVC (x264)
ffmpeg -y -pix_fmt yuv420p -s [resolution] -r [frame-rate] -i [input yuv420 raw video] -c:v libx264 -preset medium -crf [9, 12, 15, 18, 21, 24, 27, 30] -tune zerolatency -x264-params "keyint=12:min-keyint=12:verbose=1" [output mkv file path]
HEVC (HM)
[Path to HM folder]/bin/TAppEncoderStatic -c [Path to HM folder]/cfg/encoder_lowdelay_P_main.cfg -i [input yuv raw video] --InputBitDepth=8 -wdt [width] -hgt [height] -fr [frame-rate] -f [number of frames] -o [output yuv video] -b [encoded bitstream bin file] -ip 12 -q [12, 17, 22, 27, 32, 37, 42]
C BD RATE COMPUTATION
def Bjontegaard_Delta_Rate( # rate and psnr in ascending order rate_ref, psnr_ref, # reference rate_new, psnr_new, # new result
): min_psnr = max(psnr_ref[0], psnr_new[0], 30)
max_psnr = min(psnr_ref[-1], psnr_new[-1], 44)
log_rate_ref = log(rate_ref) log_rate_new = log(rate_new)
spline_ref = scipy.interpolate.CubicSpline( psnr_ref, log_rate_ref, bc_type=’not-a-knot’, extrapolate=True,
) spline_new = scipy.interpolate.CubicSpline(
psnr_new, log_rate_new, bc_type=’not-a-knot’, extrapolate=True,
)
delta_log_rate = ( spline_new.integrate(min_psnr, max_psnr) - spline_ref.integrate(min_psnr, max_psnr)
)
delta_rate = exp(delta_log_rate / (max_psnr - min_psnr))
return 100 * (delta_rate - 1)
C.1 BD RATE FOR IMAGE CODEC
# Evaluate BD-rate on an image dataset bd_rates = list() for image in image_dataset:
# evaluate rate and psnr on reference and new codec # for this image with different qualities rate_ref, psnr_ref = ReferenceCodec(image, qp=[...]) rate_new, psnr_new = NewImageCodec(image, beta=[...]) bd_rates.append(
Bjontegaard_Delta_Rate( rate_ref, psnr_ref, rate_new, psnr_new,
) )
# BD is computed per image and then averaged bd_rate = bd_rates.mean()
C.2 BD RATE FOR VIDEO CODEC
# Evaluate BD-rate on a video dataset bd_rates = list() for video in video_dataset:
# evaluate rate and psnr on reference and new codec # for this video with different qualities rate_ref, psnr_ref = ReferenceCodec(video, qp=[...]) rate_new, psnr_new = NewVideoCodec(video, beta=[...]) bd_rates.append(
Bjontegaard_Delta_Rate( rate_ref, psnr_ref, rate_new, psnr_new,
) )
# BD is computed per video and then averaged bd_rate = bd_rates.mean()
D MORE RESULTS
D.1 IMAGE COMPRESSION
Additional rate-distortion results on CLIC2021, Tecnick, and JPEG-AI are provided in Figure 14, Figure 15, and Figure 16.
For a complete comparison of results from existing literatures, we provide a summary RD plot of all neural image codec solutions known to us in Figure 28 on Kodak. In Figure 29 and Figure 30, we plot the percentage rate saving with BPG444 and VTM-12.1 as reference, respectively.
D.2 VIDEO COMPRESSION
In Figure 17 and Figure 18, we provide performance comparison of Conv-SSF, SwinT-SSF, HEVC (x265), and AVC (x264) with per-video breakdown.
D.3 CODING COMPLEXITY
We evaluate the decoding complexity of all neural image codecs in terms of the time for network inference and entropy coding, peak GPU memory, model size, etc. We select 100 high-resolution images from the CLIC test set and center crop them to three resolutions (768×512, 1280×768, 1792×1024) to see how those metrics scale with image size. Batch size is one for all model inference. We run the experiment on a local workstation with one RTX 2080 Ti GPU, with PyTorch 1.9.0 and CUDA toolkit 11.1. For bit-exact entropy coding, we need to use deterministic convolution13. The neural networks run on the single GPU and entropy coding runs on the CPU with 8 threads. We follow the standard protocols to measure the inference time and peak memory of neural nets, such as GPU warm-up and synchronization. File open/close is excluded from the coding time measurement. MACs, GPU peak memory, and model parameter count are profiled using the get_model_profile function of the deepspeed profiler14.
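A minimal sketch of the timing protocol described above (deterministic ops, GPU warm-up, and synchronization); model and x are placeholders, and the exact measurement script used here may differ:

import time
import torch

torch.use_deterministic_algorithms(True)  # bit-exact entropy coding requires deterministic ops

def time_inference(model, x, warmup=10, iters=50):
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):       # GPU warm-up
            model(x)
        torch.cuda.synchronize()      # flush queued kernels before starting the clock
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters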
We show more details on the coding complexity of neural image codecs in Figure 19, particularly the linear scaling with image resolution of both SwinT-based and Conv-based models. The breakdown of encoding complexity is shown in Table 4.
For completeness, we also report the profiling for CPU coding time in Table 5 and Table 6. The evaluation setup is the same as the GPU profiling case, except models are run on the CPU instead (same host machine with Intel(R) Xeon(R) W-2123 CPU @ 3.60GHz).
Table 7 reports the encoding and decoding time of VTM-12.1 under different quantization parameters (QPs), evaluated on an Intel Core i9-9940 CPU @ 3.30GHz, averaged over 24 Kodak images. As can be seen from the table, the decoding time of VTM-12.1 is a function of reconstruction quality, where longer decoding time is observed for higher-quality reconstruction. In Figure 1, the reported VTM-12.1 decoding speed corresponds to a QP value of 28, where the bpp value is similar to that obtained by models trained with β = 0.001. It is worth pointing out that the VTM-12.1 encoding process is much slower, ranging anywhere from 1 to 5 minutes per image, whereas the neural codecs run much faster.
13https://pytorch.org/docs/1.9.0/generated/torch.use_deterministic_algorithms.html?highlight=deterministic#torch.use_deterministic_algorithms
14https://www.deepspeed.ai/tutorials/flops-profiler/#usage-outside-the-deepspeed-runtime
E ANALYSIS
E.1 SPATIAL CORRELATION OF LATENT
We visualize the spatial correlation map for Conv-Hyperprior and SwinT-Hyperprior at different β in Figure 20.
E.2 EFFECTIVE RECEPTIVE FIELD
See Figure 21 for the effective receptive field of the composed encoding transforms h_a ◦ g_a, and Figure 22 for the mean attention distance visualization of each head within each transformer layer.
E.3 RATE DISTRIBUTION ACROSS LATENT CHANNELS
It is generally believed that ConvNets learn to extract various features and store them in the channels of the activations. Here we look into the features in the latent channels that are quantized and entropy coded into bitstreams. In particular, we sort the channels by their total bitrate averaged over the Kodak dataset (24 images of 768×512). The result is shown in Figure 23. We find an interesting phenomenon across models under different bitrates: there is a cutoff point in the bitrate-vs-channel curve where the bitrate suddenly drops to zero, which manifests the rate constraint in the loss function. As expected, the cutoff index decreases for models trained for smaller bitrates (larger β).
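A sketch of how such a per-channel rate curve can be obtained from the likelihoods returned by the entropy model; the likelihoods array of shape [N, C, H, W] is an assumption and may not match the actual codebase:

import numpy as np

def bits_per_channel(likelihoods):
    # likelihoods: [N, C, H, W], P(y_hat) of each quantized latent under the entropy model
    bits = -np.log2(likelihoods).sum(axis=(0, 2, 3)) / likelihoods.shape[0]  # avg bits per channel
    order = np.argsort(bits)[::-1]  # channels sorted by descending bitrate
    return bits[order], order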
E.4 CENTERED KERNEL ALIGNMENT
To investigate the difference or similarity between the latent features of Conv-based and SwinT-based models, we resort to a commonly used tool in representation learning called centered kernel alignment (CKA) (Kornblith et al., 2019). We evaluate CKA between each Conv latent channel and each SwinT latent channel (both models are trained under the same β) over the 24 Kodak images. There are 320 channels in both the Conv latent and the SwinT latent, resulting in a 320 × 320 CKA matrix. The result is shown in Figure 24. The latent channels are ordered by their average bitrate over the Kodak images (same as in Section E.3). The CKA matrix has a clear block structure, where the high-similarity region corresponds to the latent channels before the bitrate cutoff in the rate distribution curve (Figure 23).
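For reference, a minimal implementation of linear CKA between two feature matrices (rows are the same examples, e.g., the flattened spatial values of one latent channel over the 24 Kodak images), following Kornblith et al. (2019); the exact preprocessing used here may differ:

import numpy as np

def linear_cka(X, Y):
    # X: [n, d1], Y: [n, d2]; rows correspond to the same n examples
    X = X - X.mean(axis=0)  # center the features
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2
    return hsic / (np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro'))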
Identification of SwinT and Conv latent channels with CKA Within the block of high similarity (from the CKA matrix), we identify the 'least' similar SwinT latent channels, i.e., those with the lowest CKA values against all Conv latent channels. For each of the identified SwinT channels, we then find the Conv latent channel with the largest CKA value between the two. This way, we are able to identify latent channels of the two different models with high similarity. We show the top 8 identified channel pairs in Figure 25. The channels are indeed highly similar, up to a sign flip, even though the two model architectures are quite different. This empirical result is relevant to the literature on the identifiability of generative models (Khemakhem et al., 2020).
E.5 PROGRESSIVE DECODING
More visual examples of reconstructions of checkerboard (spatially) masked latent are provided in Figure 26.
Channel-wise progressive decoding Here we visualize the behavior of channel-wise progressive decoding of Conv and SwinT models. For both models, we order the latent channels with a heuristic importance metric: the reduction of distortion over the increase of rate when one more channel is included for decoding. Starting from the per-channel bitrate order, we pass the leading channels (zeroing out all remaining channels) to the decoder to obtain the reconstruction and calculate the distortion. We plot the top 8 channels following this importance order. For each channel, we show 6 maps from top to bottom: latent values, mean prediction from the hyper decoder, standard deviation prediction from the hyper decoder, the bitmap, the reconstruction with only the current channel, and the reconstruction with all channels up to the current one. The result is shown in Figure 27. For Conv models, the top 3 important channels are usually responsible for lightness and the two color components (blue-yellow, red-green), similar to the LAB colorspace. The rest of the latent channels are responsible for adding details like texture and edges. For the SwinT models at low bitrate, there is one distinctly different channel (the first column in Figure 27b), which is on a coarse scale (with smooth blocks) and responsible for reconstructing a nearly constant image with value close to 120 (the mean value of natural image datasets). This latent channel costs an extremely small bitrate but reaches a PSNR of 13dB. When we remove this first channel, progressive reconstruction with the remaining 7 leading channels only reaches a PSNR of around 16dB, instead of the 26dB shown in the figure.
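A sketch of the channel-wise progressive decoding loop described above; decoder, latent, and the 8-bit pixel range are placeholders/assumptions:

import numpy as np

def progressive_psnr(decoder, latent, image, channel_order):
    # latent: [C, H, W]; decode with only the k leading channels kept, zeroing out the rest
    psnrs = []
    for k in range(1, len(channel_order) + 1):
        masked = np.zeros_like(latent)
        keep = list(channel_order[:k])
        masked[keep] = latent[keep]
        recon = decoder(masked)
        mse = np.mean((recon - image) ** 2)
        psnrs.append(10 * np.log10(255.0 ** 2 / mse))  # assumes 8-bit pixel range
    return psnrs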
F MORE ABLATION STUDIES
Local self-attention To see whether local self-attention is the most important component in transformers, we replace it with a depthwise separable convolution block15 (Han et al., 2021; El-Nouby et al., 2021), which performs spatial feature aggregation similar to self-attention, while keeping all other components the same as in the SwinT. We found this change leads only to a minor degradation in RD. This suggests that other components in transformers, such as MLPs and skip connections, not just self-attention, may also play a big role in the leading performance in our work and many other tasks (Dong et al., 2021).
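For concreteness, one plausible PyTorch form of such a block, following the structure in footnote 15 (Conv1×1-LayerNorm-ReLU-DSConv3×3-LayerNorm-ReLU-Conv1×1-LayerNorm-ReLU); channel counts and normalization placement are assumptions:

import torch.nn as nn

class ChannelLayerNorm(nn.Module):
    # LayerNorm over the channel dimension of [B, C, H, W] feature maps
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
    def forward(self, x):
        return self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

def dsconv_block(dim):
    return nn.Sequential(
        nn.Conv2d(dim, dim, 1), ChannelLayerNorm(dim), nn.ReLU(),
        nn.Conv2d(dim, dim, 3, padding=1, groups=dim),  # depthwise 3x3
        ChannelLayerNorm(dim), nn.ReLU(),
        nn.Conv2d(dim, dim, 1), ChannelLayerNorm(dim), nn.ReLU(),
    )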
Small depths Upon investigating the mean attention distance shown in Figure 22, we find that in the last block of each of the last two encoder stages, about half of the attention heads degenerate to attending to fixed nearby pixels. This suggests redundant transformer blocks at those stages, so we remove those two blocks, i.e., going from depths [2, 2, 6, 2] to [2, 2, 5, 1]. The resulting SwinT-Hyperprior has even fewer parameters (20.6M) than Conv-Hyperprior (21.4M) with almost no RD loss compared to the larger model. We expect a broader hyperparameter search will identify models with a better RD-complexity trade-off than we currently show in this work.
Deeper Conv encoder Deeper models are usually more expressive (Raghu et al., 2017), and state-of-the-art Conv-based compression models typically use much deeper transforms than the encoder in the original Hyperprior model (Ballé et al., 2018). As a sanity check on whether deeper convolutional transforms can outperform SwinT-based encoder transforms with 12 blocks, we take an existing design (Chen et al., 2021) with residual blocks and attention (sigmoid gating) layers, which has over 50 conv layers in both the encoder and the decoder and more parameters than the Conv baseline. It indeed improves the RD at lower bitrates, but is still worse than SwinT-Hyperprior and gets much worse at higher bitrates. This is probably why compression models based on this type of transform do not report results at higher bitrates.
15Conv1×1-LayerNorm-ReLU-DSConv3×3-LayerNorm-ReLU-Conv1×1-LayerNorm-ReLU | 1. What are the main contributions and strengths of the paper regarding Swin Transformer's performance in image and video compression?
2. What are the weaknesses or areas for improvement in the paper, particularly regarding the encoding speed and receptive field analysis?
3. How do the results of the paper compare to prior works in terms of rate distortion gains and decoding times?
4. Can the authors provide more insights into the latent analysis and ablation studies conducted in the paper?
5. Are there any potential applications or future research directions stemming from the findings of this paper on Swin Transformer's performance in image and video compression? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, the authors show that applying the Swin Transformer to both neural image and video compression yields significant rate-distortion gains over convolution-based methods, and provide an analysis of latents and ablation studies to understand the difference between the two approaches.
Review
Strengths: The authors show strong results for SwinT networks across two entropy modeling setups (channel-wise autoregressive and hyperprior) for image compression, and scale-space architectures for video compression, at much faster decoding times.
SwinT's high relative performance also appears to be more resilient to scaling the network down to smaller capacities, compared to a Conv model. The small SwinT has half the BD-rate loss of the Conv equivalent at under half the speed.
Latent analysis for SwinT shows uniformly smaller correlations than the Conv models. It is a noteworthy result and raises future research questions on whether imposing that form leads to better image compression, or whether it is simply a byproduct of a naturally better method. The same question holds for the receptive field analysis, where SwinT's receptive fields are more rectilinear than the Conv models', which may indicate that the Conv models let mostly unneeded latents influence the reconstructions.
Overall, this paper includes an abundance of strong results, across multiple architectures and datasets for image and video compression. The appendices have a plethora of detail about training, architecture design and results that should make this paper very easy to reproduce and verify.
Weaknesses: In Figure 19, SwinT models are shown to be much slower at encoding than Conv models while much faster at decoding. Is there a possible shift of compute going on in the SwinT networks that isn't being taken advantage of in the Conv models?
The gap at low bitrates is striking: the SwinT models heavily outperform the Conv models, but that gain disappears at higher qualities/rates. Any intuition on why that is?
The image and video codec baselines were run with the medium preset and with low/zero-latency optimizations enabled. This will generally put the classic codecs at a lower R/D in comparisons. That would be appropriate when comparing against a method with a very fast encode, yet in Figure 19, SwinT models are strictly slower at encoding than Conv models.
In Table 3, the SwinT entropy transforms h_s are slower than their Conv counterparts; is there an explanation for that? Is there performance to be gained by using the Conv variants there and using SwinT exclusively for the image synthesis layers g_s?
Suggestions / Comments: 4.1 Training: "3.5M and 3.1M batches" => "3.5M and 3.1M steps" is a clearer way to describe how many training steps the network sees.
Footnote 11 in the Appendix is repeated three paragraphs above, feel free to remove one of them: "We did not use the crop size 384 × 384 during the second stage as in the original paper because the resolution of Vimeo dataset is 448×256. We found in our case increasing crop size from 256×256 to 384 × 256 in the second stage does not improve RD."
Do you have any timings for the SwinT vs. Conv models without a GPU? Do SwinT models gain a runtime advantage from using a larger working set (max GPU memory) of very fast GPU memory, compared to the Conv models, which don't have as high a maximum memory usage?
ICLR | Title
On a Built-in Conflict between Deep Learning and Systematic Generalization
Abstract
Out-of-distribution or systematic generalization is a desirable property that most deep learning algorithms lack. In this paper, we hypothesize that internal function sharing is one of the reasons to weaken systematic generalization in deep learning for classification tasks. Under equivalent prediction, a model partitions an input space into multiple parts separated by boundaries. The function sharing prefers to reuse boundaries, leading to fewer parts for new outputs, which conflicts with systematic generalization. We show such phenomena in standard deep learning models, such as fully connected, convolutional, residual networks, LSTMs, and (Vision) Transformers. We hope this study provides novel insights and forms a basis for new research directions to improve systematic generalization. Source codes are available in the supplementary material.
1 INTRODUCTION
A fundamental property of artificial intelligence is generalization, where a trained model appropriately processes unseen test samples. Many problems adopt the i.i.d. assumption. On the other hand, out-of-distribution (o.o.d.) or systematic generalization (Fodor & Pylyshyn, 1988; Lake & Baroni, 2018) requires that training and test distributions are disjoint, so that test samples have zero probability under the training distribution. Systematic generalization is crucial for human learning and supports efficient data use and creativity. Therefore, machines are encouraged to acquire such generalization ability to achieve human-like intelligence.
Systematic generalization usually requires that a sample has multiple explanatory factors of variation (Bengio et al., 2013), and generalization is enabled by producing an unseen combination of seen factor values. For example, models trained on blue rectangles and green triangles predict blue triangles. We adopt factors mainly for designing experiments and developing intuitions. Factors help the experiments because new outputs are only related to function sharing between factors (Section 3). We therefore limit our claim to the case of recombination of factors.
One stream of artificial intelligence is Connectionism (Feldman & Ballard, 1982; Rumelhart et al., 1986), which uses many simple neuron-like units, richly interconnected and processed in parallel. It was criticized that Connectionist models do not support systematic generalization well (Fodor & Pylyshyn, 1988; Marcus, 1998). Deep learning (LeCun et al., 2015) originates from Connectionism, and various techniques have enabled multi-layer modeling and improved performance on i.i.d. problems in recent years. Also, specific algorithms have been proposed to equip deep learning with systematic generalization ability (Russin et al., 2019; Lake, 2019). However, there has been less discussion of why standard deep learning models do not achieve systematic generalization.
This paper addresses the above question by looking into a built-in characteristic of deep learning. A node (or a set of nodes) is a function of the network input. By function sharing, we mean that nodes in one layer share inputs from nodes in the previous layer; it may also be called activation sharing or feature sharing. We hypothesize that function sharing is one of the causes preventing systematic generalization in deep learning. Under equivalent prediction, a classification network partitions an input space into multiple parts separated by boundaries. Function sharing prefers to reuse boundaries and avoid redundant ones for training predictions. This leads to fewer parts for new outputs and weakens systematic generalization. The nearest neighbor classifier is an analogy for this effect: it predicts a training sample's output for any test sample, so it has no part for new outputs. We also discuss why function sharing belongs naturally to deep learning (Section 4).
Figure 2 has an intuitive example explaining why the conflict happens. A test sample (in orange) equals a set of training samples (+/+) on the first output. Then they are also equal on the second output if the function is reused (see caption). Therefore, they are equal on all the outputs, which conflicts with systematic generalization. More generally, the two boundaries jointly partition an input space into multiple parts, and deep learning prefers (a) because it has fewer parts than (b). Figure 3 has a visualized example, similar to the example in Figure 2a. The two functions are shared in the top-right region, and few inputs are predicted as the new combination. Figure 1 has a simplified plot of a result from the experiment section. As the degree of function sharing increases (more shared layers), the accuracy on the test dataset, i.e., the generalization capacity, decreases accordingly. This supports that function sharing weakens systematic generalization. Please refer to Section 3 for more details.
This paper contributes to uncovering a built-in conflict between deep learning and systematic generalization. We hope this study provides novel insights, forms a basis for new research directions, and helps improve machine intelligence to the human level.
2 A BUILT-IN CONFLICT
We hypothesize a built-in conflict between the function sharing in deep learning and systematic generalization. We cover the definition, assumptions, and derivations of the conflict.
2.1 SYSTEMATIC GENERALIZATION
We have input set X and output set Y. Y is decided by K factors Y_1, . . . , Y_K. The training set D_train has input X_train and output Y_train. The test set D_test has input X_test and output Y_test. In systematic generalization, Y_train and Y_test are disjoint. However, the values for each factor i are included in the training output. A model f maps input X to the prediction of output f(X). A model enables systematic generalization if it correctly predicts the ground-truth test outputs.
Definition 1 (Systematic generalization). The dataset requires that any test label is not a training label, but each factor of a test label is seen in a training label:
∀(x, y) ∈ D_test : y ∉ Y_train, and ∀i = 1, . . . , K, ∃y′ ∈ Y_train : y′_i = y_i.
A function f enables systematic generalization if ∀(x, y) ∈ D_test : y = f(x).
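A small sketch of how the condition in Definition 1 can be verified on concrete label sets; labels are tuples of factor values, and the function name is ours, not the paper's:

def check_systematic_split(train_labels, test_labels):
    # each label is a tuple (y_1, ..., y_K) of factor values
    train_set = set(train_labels)
    for y in test_labels:
        assert y not in train_set  # the combination itself is unseen
        for i, yi in enumerate(y):  # but every factor value appears in some training label
            assert any(yp[i] == yi for yp in train_set)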
When the model is well trained, we assume it correctly predicts training samples. Assumption 1 (Correct training prediction). ∀(x, y) ∈ D_train : y = f(x).
2.2 FUNCTION SHARING
In Figure 2, deep learning prefers the model in (a) over the model in (b). We denote the models as f and g (mapping from input to output), respectively. We see that g has one more region for the new factor combination between the magenta and cyan boundaries. This region exists because the magenta boundary in g splits the +/+ region in f into two parts. It means the partition under g refines that under f , but not vice versa.
We assume a property of function sharing. For general models f and g, deep learning prefers f over g more or equally if the partition under g is a refinement of that under f . Equivalently, if two inputs have an identical prediction in g, their predictions are still equal in f . It means an equal prediction in g implies that in f . Assumption 2 (Function sharing). Deep learning prefers f over g more or equally if
∀x_a, x_b ∈ X : g(x_a) = g(x_b) =⇒ f(x_a) = f(x_b)
Note that it is a bias in the learning process. In case we need f to be strictly more preferred over g, we can additionally show that identical prediction in f does not imply that in g.
We consider what causes the function sharing. Intuitively, the preference comes from greedy optimization and the task requirement to learn a complicated model. A model learns to split or partition inputs by different predictions. Some splits (which may correspond to a factor) might be easier to learn than others. For example, learning to split inputs by color is more straightforward than by shape. Greedy optimization then learns such a split quickly and reuses its intermediate functions to learn other splits. It is not likely that multiple splits (or factors) are learned equally fast throughout the training process.
The mechanism of reusing function is similar to auxiliary tasks. The parameters are updated to address a complicated main task while quickly learning and keeping the prediction ability for more straightforward auxiliary tasks. A more common but less similar example is pre-training. Pretrained modules, such as image feature extractors or word embeddings, share the internal functions with the main target tasks. However, the pre-training and the main task training do not happen simultaneously. We will discuss more insights in Section 4 and Appendix C.
2.3 THE CONFLICT
We derive propositions for proving the theorem and explaining the reasons for the phenomena. In practice, the assumptions may not hold strictly, and the effects come more from the soft biases of the conclusions. The proofs are in Appendix B.
Assumption 2 leads a model to predict training outputs for any input.
Proposition 1 (Seen prediction). From Assumption 2, ∀x ∈ X : f(x) ∈ f(X_train).
Informally, suppose the set of inputs that a function f does not map to training outputs f(X_train) is non-empty. Then we can design another function f′ that predicts a training output for these inputs and keeps the predictions for all other inputs. Both f and f′ distinguish training outputs equally well. With Assumption 2, f′ is preferred, hence ∀x ∈ X : f(x) ∈ f(X_train). The argument does not use Assumption 1, indicating that the phenomenon may occur before a model is well trained.
If a model performs well for the training set, it predicts training ground-truth output for any input.
Proposition 2 (Seen label). From Assumption 1 and Proposition 1, ∀x ∈ X : f(x) ∈ Y_train.
It says that the prediction is not any new output. It is a stronger argument than avoiding a particular new output for each input in systematic generalization. It explains that prediction is incorrect because any new output is resisted. We evaluate it in Section 3.
We then look at the conflict. For any test sample (x, y), the definition of systematic generalization requires that the output y is not a training output in Y_train. However, this contradicts Proposition 2.
Theorem 1 (The conflict). From Definition 1 and Proposition 2, ∀(x, y) ∈ D_test : y ≠ f(x).
We covered systematic generalization definition and function sharing assumption. We then derived propositions and a theorem of the conflict.
3 EXPERIMENTS
We run experiments to show that function sharing reduces the systematic generalization ability of deep learning. We not only compare sharing versus not sharing, but also adjust the degree of sharing. We focus on the cases where new outputs are unseen combinations of seen factor values. We cover different standard deep neural network models. The details of the networks and experiments can be found in Appendix D. More experiments with natural inputs are in Appendix A. The results on zero-shot learning datasets are in Appendix E, where factors are mainly related to input locality. We now describe the experiment settings and results.
3.1 SETTINGS
Data preparation We construct a dataset from two ten-class classification datasets. The training data are generated from the original training dataset. We first choose the output y and then choose x based on it. y_1 is chosen from all possible labels. y_2 is chosen from the five classes {y_1, y_1+1, . . . , y_1+4} (labels are taken modulo 10). The test data are generated from the original test dataset. y_1 is chosen in the same way as in training. y_2 is chosen from the other classes {y_1+5, y_1+6, . . . , y_1+9}. In this design, training and test label combinations are mutually exclusive, but the test labels of each output factor are seen in training. Any factor label appears evenly in training and test combinations. x_1 and x_2 are chosen conditioned on their labels y_1 and y_2, respectively, and merged into the input x. The original datasets and input merge methods vary per experiment. All the choices follow uniform distributions.
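A minimal sketch of the label-pair sampling described above (the function name is ours):

import numpy as np

def sample_label_pair(split, num_classes=10, rng=np.random):
    y1 = rng.randint(num_classes)
    if split == 'train':
        y2 = (y1 + rng.randint(0, 5)) % num_classes   # classes y1 .. y1+4 (mod 10)
    else:
        y2 = (y1 + rng.randint(5, 10)) % num_classes  # classes y1+5 .. y1+9 (mod 10)
    return y1, y2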
Architecture To evaluate the influence of function sharing on new outputs, we change the function-sharing ability while keeping the other properties of a deep learning model stable. We therefore modify the function sharing related to new outputs. Since a new output appears only as a new factor combination in our setting, we adjust the sharing between output factors; this avoids removing all sharing and the resulting difficulties in model design. We choose a layer and duplicate the following layers, keeping the total number of hidden nodes in each layer where feasible (Figure 4). We call the former part the shared network and the latter parts the individual networks. Each individual network predicts one output factor, so the output is disentangled. We will discuss entangled outputs in Section 4. We keep the depth of the whole network fixed and change the depths of the shared network and individual networks. Note that sharing only the input layer amounts to learning two separate models.
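A sketch of this shared/individual split for the fully connected case, in Keras (the implementation is in TensorFlow, per Appendix D); the widths and node-count bookkeeping are assumptions based on the description above:

import tensorflow as tf
from tensorflow.keras import layers

def build_model(input_shape, shared_depth, total_depth=7, width=512, classes=10):
    inp = layers.Input(shape=input_shape)
    h = layers.Flatten()(inp)
    for _ in range(shared_depth):                 # shared network
        h = layers.Dense(width, activation='relu')(h)
    outs = []
    for _ in range(2):                            # one individual network per output factor
        g = h
        for _ in range(total_depth - shared_depth):
            g = layers.Dense(width // 2, activation='relu')(g)  # two heads keep the total node count
        outs.append(layers.Dense(classes, activation='softmax')(g))
    return tf.keras.Model(inp, outs)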
Evaluation metrics We use accuracy as the metric. A sample prediction is correct if all the outputs are correct. We have three types of accuracy. The first is the regular evaluation of test data
for systematic generalization (a: Test Sample Accuracy) corresponding to Theorem 1. We also consider a set of inputs mapped to unseen output combinations corresponding to Proposition 2. We evaluate whether the test samples predict one of the unseen output factor combinations (b: Test Set Accuracy). However, test samples are only a subset of input space. If a model learns a different set of factors, the expected inputs may not be those of test samples. So we also evaluate systematic generalization as a model property for any valid input. We randomly draw test inputs from the whole input space (c: Random Set Accuracy)1. We run each experiment five times and plot the mean and the standard deviation (Figure 5). The result numbers are in Table 2 (Appendix D.3).
a: E_{(x,y)∼P(D_test)} [δ(f(x) = y)],  b: E_{x∼P(X_test)} [δ(f(x) ∈ Y_test)],  c: E_{x∼U[𝒳]} [δ(f(x) ∈ Y_test)]
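A sketch of how the three accuracies can be computed given a trained model f returning (y_1, y_2) tuples; the variable names are ours:

import numpy as np

def evaluate(f, x_test, y_test, x_random, test_label_set):
    pred = f(x_test)
    a = np.mean([tuple(p) == tuple(y) for p, y in zip(pred, y_test)])   # Test Sample Accuracy
    b = np.mean([tuple(p) in test_label_set for p in pred])            # Test Set Accuracy
    c = np.mean([tuple(p) in test_label_set for p in f(x_random)])     # Random Set Accuracy
    return a, b, c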
3.2 RESULTS
Fully Connected Network We use an eight-layer fully connected neural network with a flattened image input. We use the Fashion dataset (Xiao et al., 2017) and the MNIST dataset (LeCun et al., 1998). The datasets are simple enough that a fully connected neural network does not under-fit the training data. We merge the two inputs by averaging the values at each input node.
Convolutional Network We use a convolutional neural network with six convolutional layers and two fully connected layers. We use the CIFAR-10 dataset (Krizhevsky, 2009) and the Fashion dataset (Xiao et al., 2017). We scale the input sizes to that of the larger one and broadcast gray images to colored ones. We merge the inputs by averaging at each node. We use the Fashion dataset as one factor because the average of two colored images can cause training data under-fitting for convolutional neural networks.
Residual Network We use ResNet50 (He et al., 2016), which has five stages, each treated as a layer while changing the shared network depth. It has the dataset setting in the CNN experiment.
Vision Transformer We use Vision Transformer (Dosovitskiy et al., 2021) with one fully connected layer for each patch, five attention layers, and two fully connected layers. We treat the patches as one layer. It has the dataset setting in the CNN experiment.
LSTM A recurrent network has the same parameters for each layer, so it does not support learning different individual networks. Instead, we treat an LSTM as a layer. We use stacked LSTM models with an embedding layer, five bidirectional LSTM layers, and two fully connected layers. We use the Reuters dataset (Joachims, 1998) for both the first and the second datasets. We filter samples by a maximum input length of 200 and use the most frequent ten classes of samples. We merge inputs by concatenating two input texts. The inputs have different lengths because the text lengths vary.
We also run experiments with a one-layer LSTM, which compares sharing versus not sharing all layers. The results indicate that the shared network generalizes less than the individual networks (Table 2).
1δ(·) is 1 if the statement is true and 0 otherwise. U[𝒳] is the uniform distribution over valid inputs.
Transformer We use Transformer (Vaswani et al., 2017). Since it is a classification problem, we only use the encoder. It has one embedding layer, five hidden layers, and two fully connected layers. We use the same dataset setting as the LSTM experiment.
Summary of results Figure 5 shows that, for each evaluation, the accuracy on the left end (not sharing any hidden layer) is higher than that on the right end (sharing all hidden layers), and it generally decreases as the shared network depth increases. The results indicate that the function sharing weakens systematic generalization.
4 DISCUSSIONS
4.1 ENTANGLED OUTPUT
Though the theory does not require disentangled output, the experiments use it to split the network into individual ones. We discuss why entangled output prediction is not easier than disentangled prediction; hence the experimental conclusions are likely to extend to entangled outputs. (1) Disentangled output is a particular and usually less complicated case of entangled output. (2) Entangled output can be seen as another shared layer, and the experiments show that increasing the number of shared layers reduces systematic generalization ability, so entangled outputs are also likely to suffer from the problem. (3) We can see each output node of an entangled output as a factor, so it becomes a disentangled output. The generalization is even more difficult if a value is unseen for an output node.
4.2 BEYOND FACTOR RECOMBINATION
The definition of systematic generalization (Definition 1) requires that each test label factor is seen in a training label. However, it is not directly used in the derivations, and the conclusions may apply to more general o.o.d. problems beyond recombining factors. A new output may correspond to an unseen activation for an output node, e.g., a new class in a classification problem. In such settings, it is sometimes discussed that the bias parameter of the output is a reason to avoid the new value prediction because it does not have any training signal to increase its value. This work provides another reason for not predicting a new value.
4.3 WHY DOES FUNCTION SHARING HAPPEN IN DEEP LEARNING?
We discuss that function sharing can be caused by characteristics of deep learning: deep architecture, shareable network, and greedy optimization. Deep architecture and shareable networks make it possible for factors to share elaborated functions, and greedy optimization encourages the sharing. Deep architecture and greedy search are necessary for deep learning. Deep learning uses deep architecture to fit complicated non-linear functions. Deep learning has a large and complex parameter space. To search in it, we need some prioritization, which leads to a greedy search. The shareable network is widely used in standard deep learning models and works well for i.i.d. problems. However, it is less critical compared to the other ones.
4.4 POTENTIAL SOLUTIONS
We consider possible solutions to avoid function sharing and achieve systematic generalization. From the above discussion, we look at shareable networks. We consider the recombination of factors and focus on the sharing between factors. One potential solution uses individual networks for the output factors, similar to the experimental setup. We discuss how to design networks when the input or the output is entangled. If the output is entangled, we can design an architecture where each individual network changes only one factor in the output; for example, one individual network changes color and another changes shape. If the input is entangled, we need to extract factors from it to feed the individual networks. This raises two questions: how to avoid spurious influence from other factors, and how to keep the extraction working in the test distribution. We can bottleneck the representations for the first, and divide the input into units that are invariant between training and test, e.g., words or objects, for the second.
5 RELATED WORK
Systematic generalization and deep learning Systematic generalization2 (Fodor & Pylyshyn, 1988; Lake & Baroni, 2018; Bahdanau et al., 2019) is considered the “Great Move” of evolution, caused by the need to process an increasing amount and diversity of environmental information (Newell, 1990). Cognitive scientists see it as central for an organism to view the world (Gallistel & King, 2011). Studies indicate it is related to the prefrontal cortex (Robin & Holyoak, 1995). It was discussed that commonsense is critical (Mccarthy, 1959; Lenat et al., 1986) for systematic generalization, and recent works aim to find general prior knowledge (Goyal & Bengio, 2020), e.g.,
2It is also called compositional generalization in other literature.
Consciousness Prior (Bengio, 2017). Levels of systematicity were defined (Hadley, 1992; Niklasson & van Gelder, 1994), and types of tests were summarized (Hupkes et al., 2020). We focus on the primary case with an unseen combination of seen factor values.
A closely related field is causal learning, rooted in the eighteenth century (Hume, 2003) and the classical fields of AI (Pearl, 2003). It was mainly explored from statistical perspectives (Pearl, 2009; Peters et al., 2016; Greenland et al., 1999; Pearl, 2018) with do-calculus (Pearl, 1995; 2009) and interventions (Peters et al., 2016). The causation forms Independent Causal Mechanisms (ICMs) (Peters et al., 2017; Schölkopf et al., 2021). Systematic generalization is the counterfactual when the joint input distribution is intervened on to have new values with zero probability in training (covariate shift). This work indicates that standard neural networks do not prefer to learn ICMs.
Parallel Distributed Processing (PDP) models (Rumelhart et al., 1986) use Connectionist models with distributed representations, which describe an object in terms of a set of factors. Though they have the potential to combine the factors to create unseen object representations (Hinton, 1990), it was criticized that they do not address systematic generalization in general (Fodor & Pylyshyn, 1988; Marcus, 1998). Deep learning is a recent PDP model with many achievements (LeCun et al., 2015; He et al., 2016). It was studied that deep neural networks use the composition of functions to achieve high performance (Montufar et al., 2014). The improvements in i.i.d. problems encourage to equip deep learning with systematic generalization.
Recent directions In addition to architecture design (Russin et al., 2019; Andreas et al., 2016) and data augmentation (Andreas, 2020; Akyürek et al., 2021; Jia & Liang, 2016), the main perspectives for systematic generalization approaches include disentangled representation learning, attention mechanism, and meta-learning.
Disentangled representation (Bengio et al., 2013) is learned in unsupervised manners. Early methods learn the representation from statistical independence (Higgins et al., 2017; Locatello et al., 2019). Later, the definition of disentangled representation was proposed with symmetry transformation (Higgins et al., 2018). It leads to Symmetry-based Disentangled Representation Learning (Caselles-Dupré et al., 2019; Painter et al., 2020; Pfau et al., 2020). A disentangled representation learning model can be used as a feature extractor for other systematic generalization tasks.
Attention mechanisms are widely used in neural networks (Bahdanau et al., 2015). Transformers (Vaswani et al., 2017) are modern neural network architectures with self-attention. Recurrent Independent Mechanisms (Goyal et al., 2021b) use attention and the name of the incoming nodes for variable binding. Global workspace (Goyal et al., 2021a) improves them by using limited-capacity global communication to enable the exchangeability of knowledge. Discrete-valued communication bottleneck (Liu et al., 2021) further enhances systematic generalization ability.
Meta-learning (Lake, 2019) usually designs a series of training tasks for learning a meta-learner and uses it in a target task. Each task has training and test data, where test data requires systematic generalization from training data. When ICMs are available, they can be used to generate meta-learning tasks (Schölkopf et al., 2021). Meta-reinforcement learning was used for causal reasoning (Dasgupta et al., 2019). Meta-learning can also capture the adaptation speed to discover causal relations (Bengio et al., 2020; Ke et al., 2019).
Deep learning is a fast-growing field, and many efforts focus on designing architectures and algorithms to improve its performance. However, it is less discussed why standard deep learning models do not achieve systematic generalization. This paper looks into a built-in conflict.
6 CONCLUSION
This paper investigates a built-in conflict between deep learning and systematic generalization. It explains one of the reasons why standard neural networks seldom achieve systematic generalization. We hypothesize that the conflict is caused by sharing internal functions, and experiments support it. A model partitions an input space into multiple parts separated by boundaries. The function sharing tends to reuse the boundaries, leading to fewer parts for new outputs, which conflicts with systematic generalization. The phenomena are shown in different standard deep neural networks. We hope this finding provides a new understanding of systematic generalization mechanisms in deep learning and helps to improve machine learning algorithms for a higher level of artificial intelligence.
A MORE EXPERIMENTS
We run experiments with natural inputs. For image data, we use NICO++ dataset (Zhang et al., 2022). We use five foregrounds as the first output label and five backgrounds as the second. For text data, we use Amazon reviews (Ni et al., 2019). We use five categories as the first output label and five ratings as the second. Either dataset has two outputs, each containing five possible classes. There are 25 class combinations, and we separate them into 15 training combinations and 10 test ones in a similar way as in the experiment section. For fully connected networks, we use the Fashion dataset (ten classes) and render with ten colors. Please refer to Figure 6 and Table 1 for examples.
For the image dataset, we aggregated foregrounds into five abstract classes, e.g., mammal and vehicle. It better uses the limited data with annotations on combined labels. We use 72,176 image samples. For text data, we randomly select 100,000 samples for each category, with a length limit of 100 tokens. Data with training combinations are randomly split into training and i.i.d. generalization data with a ratio of 9:1.
The results are in Figure 7. Similar to the results in the experiment section, the test accuracies decrease as there are more shared layers. Also, both the training accuracy and the i.i.d. generalization accuracy do not decrease as much as the test accuracies.
We run ablations. Figure 8 shows the results with different layer widths. Figure 9 shows the results with dropout (Srivastava et al., 2014) and mixup (Zhang et al., 2018). All the results have similar effects as the original experiment.
B PROOFS
Proposition 1 (Seen prediction). From Assumption 2, ∀x ∈ X : f(x) ∈ f(X_train).
Proof. We are going to prove that for a function g with at least one input mapped to a new output, there exists a more preferred function f with all inputs mapped to seen outputs. Therefore, if no other function is preferred over a given function, that function follows the proposition.
Given a function g, we construct a function f. We pick an x′ ∈ X_train. For all x ∈ X:
if g(x) ∈ g(X_train): f(x) = g(x); otherwise: f(x) = f(x′).
Then, for all x_a, x_b ∈ X:
if g(x_a) ∈ g(X_train): g(x_a) = g(x_b) =⇒ f(x_a) = g(x_a) = g(x_b) = f(x_b);
otherwise: g(x_a) = g(x_b) =⇒ g(x_b) ∉ g(X_train) =⇒ f(x_a) = f(x′) = f(x_b).
In both cases, g(x_a) = g(x_b) =⇒ f(x_a) = f(x_b).
On the other hand, ∃x ∈ X_test : g(x) ∉ g(X_train) =⇒ g(x) ≠ g(x′) ∈ g(X_train), while f(x) = f(x′). So f(x) = f(x′) does not imply g(x) = g(x′). With Assumption 2, f is preferred over g.
Also, for all x ∈ X:
if g(x) ∈ g(X_train): ∃x′′ ∈ X_train : g(x) = g(x′′) =⇒ f(x) = f(x′′) ∈ f(X_train);
otherwise: f(x) = f(x′) ∈ f(X_train).
Therefore, ∀x ∈ X : f(x) ∈ f(X_train).
Proposition 2 (Seen label). From Assumption 1 and Proposition 1, ∀x ∈ X : f(x) ∈ Y_train.
Proof. From Assumption 1, f(X_train) = Y_train. From Proposition 1,
∀x ∈ X : f(x) ∈ f(X_train) = Y_train.
Theorem 1 (The conflict). From Definition 1 and Proposition 2, ∀(x, y) ∈ D_test : y ≠ f(x).
Proof. ∀(x, y) ∈ D_test: y ∉ Y_train (Definition 1); x ∈ X_test ⊆ X =⇒ f(x) ∈ Y_train (Proposition 2). Therefore y ≠ f(x).
C A CONJECTURE ON WHY THE FUNCTION SHARING HAPPENS
The neural network training process is complicated, so it is hard to describe what happens. Instead, we have a conjecture.
For a large network, when a boundary is learned for the first time, it separates the problem into two sub-problems, and their learning processes do not influence each other for the rest of the training.
We first explain the idea with an analogy of a binary decision tree (or a hierarchical classification). We then define a boundary and discuss its properties.
Binary Decision tree We consider a binary decision tree in which each decision node separates a label set into two parts, one for each sub-tree. For example, if there are 10 classes, the root node may divide them into 6 and 4 classes. Then the root node of the first sub-tree divides the 6 classes into two sets of 3 classes. In this decision tree, a node separates the input space into two parts with disjoint output labels, and each part is learned separately.
Such a decision tree does not predict new outputs because all leaf nodes predict seen outputs. We discuss neural network training process is similar to creating such a decision tree from some aspects. A decision tree node is a boundary in a neural network.
Boundary We consider a problem P = (𝒳, X, Y) with input space 𝒳, input set X, and output set Y. We have a ground-truth mapping f : X → Y and a learned mapping f̂ : 𝒳 → Y. We define a boundary as follows.
Definition 2 (Boundary). Suppose we have a binary partition (𝒳_a, 𝒳_b) of the input space 𝒳, where 𝒳_a, 𝒳_b are non-empty:
𝒳_a ∪̇ 𝒳_b = 𝒳, X_a = X ∩ 𝒳_a, X_b = X ∩ 𝒳_b, Y_a = f(X_a), Y_b = f(X_b).
It is a boundary if the output sets are disjoint and, for each part, all inputs map to its output set:
Y_a ∪̇ Y_b = Y, f̂(𝒳_a) = Y_a, f̂(𝒳_b) = Y_b.
A boundary separates a problem P = (𝒳, X, Y) into two sub-problems P_a = (𝒳_a, X_a, Y_a) and P_b = (𝒳_b, X_b, Y_b). We assume that once the boundary is learned for the first time, the learning processes of P_a and P_b do not influence each other for the rest of the training.
Assumption 3 (Separate sub-problems). When a network is large enough, learning one sub-problem does not affect the prediction of another sub-problem.
With this assumption, for a large network, a boundary separates the original problem into two problems whose training processes do not influence each other. The assumption applies recursively to each sub-problem. When a problem has only one label, all its inputs are mapped to that label, so the model learns not to predict an unseen output. This learning process is similar to that of a decision tree.
D EXPERIMENT DETAILS
D.1 VISUALIZATION SETTINGS
The model is a fully connected neural network with two input and two output nodes. It has six hidden layers with ReLU activations, and each hidden layer has eight nodes. We use a mini-batch
size of 10 with a learning rate of 0.01. We iterate until the model prediction becomes stable. Please see the original work of deep playground for more information. We use six Intel(R) Core(TM) i5-8400 2.80GHz CPUs, and the asset has a public license.
D.2 EXPERIMENT SETTINGS
We use GeForce GTX 1080 or GeForce GTX 1050 Ti GPU for single GPU experiments. We use TensorFlow for implementation. The assets have a public license.
Each input element is linearly scaled to [-0.5, 0.5] for image input. We also uniformly sample from this interval for random image input. We select two sentence lengths uniformly from valid integers (one to maximum length) and then generate each word uniformly from the vocabulary for random text input.
Fully Connected Network The input shape is 28 × 28, flattened to a vector. There are seven fully connected layers. Each of them has 512 hidden nodes and ReLU activation. The output has ten nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 2,000 iterations. Each evaluation uses 10,000 samples.
Convolutional Network The input shape is 32 × 32 × 3. There are seven convolutional layers. Each of them has 3 × 3 kernel size with 64 channels. Then the layer is flattened. We have a fully connected layer with 128 nodes and ReLU activation. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 5,000 iterations. Each evaluation uses 10,000 samples.
Residual Network The input is the same as CNN. The model is the standard ResNet50 implementation. The hidden groups are treated as one layer, so there are five hidden layers. The hidden layer size is 64. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 10,000 iterations. Each evaluation uses 10,000 samples.
Vision Transformer The input is the same as CNN. The model is the standard Vision Transformer implementation with seven hidden layers. The hidden layer size is 64. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 10,000 iterations. Each evaluation uses 10,000 samples.
LSTM The vocabulary size is 30,977, including a start symbol and padding symbol. The input length is 200. The embedding size is 64. There are seven stacked bidirectional LSTM layers, and each has 32 hidden nodes for each direction. Then the output is flattened. The output layer is a fully-connected layer with ten output nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 64, and we train 1,000 iterations. Each evaluation uses 10,000 samples.
Transformer The input is the same as that of LSTM. The embedding size is 64. There are seven hidden groups. The hidden layer size is 64. The output is flattened. The output layer is a fully-connected layer with ten output nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 64, and we train 2,000 iterations. Each evaluation uses 10,000 samples.
D.3 EXPERIMENT RESULTS
We numerically compare the individual (left ends in the result figures, 0-max) and shared (right ends, max-0) networks in Table 2. LSTM is the stacked LSTM, and LSTM-1 has only one LSTM layer. The table shows that the shared network has lower scores than the individual network on all three types of accuracy, indicating that function sharing hinders systematic generalization.
E ZERO-SHOT LEARNING DATASETS
We look at the results for zero-shot learning. We use the aPY (Farhadi et al., 2009), AwA2 (Xian et al., 2019), CUB (Wah et al., 2011), and SUN (Patterson & Hays, 2012) datasets. In these datasets, factors are mainly related to input locality (Sylvain et al., 2020). We use the pre-extracted input features for aPY and AwA2 and image input for CUB and SUN. We construct output labels from attributes. Each attribute is a binary number, 1 if it is present in a sample and 0 otherwise. We select the six attributes whose average value over all data is the most balanced (closest to 0.5). The first, third, and fifth attributes are used for the first output, and the others for the second output. Each output has eight classes yielded from all combinations of three binary attributes. For aPY, samples may share the same image, so we construct training data from the original training data and test data from the original test data. For the other datasets, we split all data into disjoint training and test data.
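A small sketch of this attribute-to-label construction; the binary encoding order is an assumption:

def attributes_to_labels(attrs):
    # attrs: the six selected binary attributes of one sample
    a = [int(v) for v in attrs]
    y1 = a[0] * 4 + a[2] * 2 + a[4]  # 1st, 3rd, 5th attributes -> 8 classes
    y2 = a[1] * 4 + a[3] * 2 + a[5]  # 2nd, 4th, 6th attributes -> 8 classes
    return y1, y2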
We use an eight-layer CNN model, the same as in the experiment section. The batch size is 512 for aPY and AwA2 and 256 for CUB and SUN. Other settings are the same as in the experiment section. The results are shown in Figure 10 and Table 4. Similar to the previous experiments, increasing the depth of the shared network relative to the individual networks reduces systematic generalization capability.
F MORE DISCUSSIONS
F.1 TRAINING PROCESS
We discussed that sharing boundaries reduces the number of partitions and shrinks the area for new outputs (Proposition 1). We run experiments to find out when this happens during training. We sample 10,000 inputs from the test data, and if an output combination has at least 50 samples, we regard it as a new-output (o.o.d.) partition. We plot the number of o.o.d. partitions and the ratio of test samples falling in the o.o.d. partitions for the shared and individual networks in Figure 11. The experiment settings follow the DNN and CNN settings in the experiment section. The results show that the differences start to appear early in training.
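One reading of this counting protocol as code (the names and the exact ratio definition are our assumptions):

from collections import Counter

def count_ood_partitions(f, x_sample, test_label_set, min_count=50):
    preds = [tuple(p) for p in f(x_sample)]  # e.g., 10,000 sampled inputs
    counts = Counter(p for p in preds if p in test_label_set)
    n_partitions = sum(1 for c in counts.values() if c >= min_count)
    ratio = sum(counts.values()) / len(preds)  # fraction of samples predicted as unseen combinations
    return n_partitions, ratio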
F.2 EQUALLY DIFFICULT FACTORS
We look at the results when the two inputs are equally hard to learn. We use the CIFAR-10 dataset for both the first and the second dataset. Since averaging two colored images can cause under-fitting of the training data, we merge the inputs by concatenating them along the channel dimension, so the input has six channels. The results are shown in Figure 12 and Table 4. Similar to the previous experiments, when the difficulties are equal, the depth of the shared network still weakens systematic generalization.
F.3 LABEL COMBINATIONS
We also test other types of training label distributions. We design tile and one-shot combinations. In tile, a label combination is used for training when Y_1 < 5 or Y_2 < 5; it is similar to the split in the illustrative example. In one-shot, a label combination is used for training when Y_1 < 9 or Y_2 < 1; in this case, (9, 0) is the only training combination with Y_1 = 9, similar to one-shot learning. The results for the fully connected neural network are in Figure 13 and Table 5. They are similar to the results in the experiment section.
2. What are the strengths and weaknesses of the paper's approach to understanding the impact of parameter sharing on systematic generalization?
3. Do you have any questions or concerns about the paper's problem formulation, writing, and explanation of concepts?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific areas where the reviewer would like to see more information or context provided, such as the description of the dataset and models used? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper hypothesise that function sharing is one of the reasons why deep learning models can’t perform systematic generalization. The paper demonstrate that as the degree of parameter sharing increases, the systematic generalization drops. From the practical stand point, the papers argues for sparsity in models that somewhat learn symbolic functions (in term of disentangling feature attributes). Although it’s not touched upon but I believe the paper can be seen from modularity perspective where each module encampasses a particular underlying function, describing certain factor of the input.
Strengths And Weaknesses
Strengths
The paper provides empirical evidence of why parameter sharing (function sharing) leads to performance drop in systematic generalization for deep learning models.
I like the problem space and believe that the authors are on to something tangible here but the lack of rigour in analysis didn't convince me.
Weaknesses
I have issues with problem formulation and writing. Some details of the work are not framed correctly and it’s hard to understand it since no context from previous literature is provided while introducing and explaining new concepts.
E.g., what is meant by "functions" in deep neural networks? Is it an individual parameter or a set of parameters?
What do the three equations at the end of Sec. 3.1 refer to?
There is no clear description of the dataset. I can see that it contains factors, but what are those factors? How are those factors combined?
On the same note, could you please provide descriptions of the datasets and models used, separately?
The results section only provides information on the models tried. No information is provided on training details or on the train/test splits of the dataset.
The authors try to ablate different model architectures; however, they use different datasets across those architectures, so it's hard to judge whether the results are consistent across architectures or whether the differences arise from the differences in datasets.
Clarity, Quality, Novelty And Reproducibility
In general, the writing needs to be improved. It was not easy to follow all the details:
From the start, it was difficult to understand what a function means in the deep learning context. To relate it better to the machine learning literature, can "function sharing" be reframed in terms of modularity?
Figure 2 is hard to understand. E.g., what does the term "function" refer to in the diagram?
ICLR | Title
On a Built-in Conflict between Deep Learning and Systematic Generalization
Abstract
Out-of-distribution or systematic generalization is a desirable property that most deep learning algorithms lack. In this paper, we hypothesize that internal function sharing is one of the reasons to weaken systematic generalization in deep learning for classification tasks. Under equivalent prediction, a model partitions an input space into multiple parts separated by boundaries. The function sharing prefers to reuse boundaries, leading to fewer parts for new outputs, which conflicts with systematic generalization. We show such phenomena in standard deep learning models, such as fully connected, convolutional, residual networks, LSTMs, and (Vision) Transformers. We hope this study provides novel insights and forms a basis for new research directions to improve systematic generalization. Source codes are available in the supplementary material.
N/A
Out-of-distribution or systematic generalization is a desirable property that most deep learning algorithms lack. In this paper, we hypothesize that internal function sharing is one of the reasons to weaken systematic generalization in deep learning for classification tasks. Under equivalent prediction, a model partitions an input space into multiple parts separated by boundaries. The function sharing prefers to reuse boundaries, leading to fewer parts for new outputs, which conflicts with systematic generalization. We show such phenomena in standard deep learning models, such as fully connected, convolutional, residual networks, LSTMs, and (Vision) Transformers. We hope this study provides novel insights and forms a basis for new research directions to improve systematic generalization. Source codes are available in the supplementary material.
1 INTRODUCTION
A fundamental property of artificial intelligence is generalization, where a trained model appropriately processes unseen test samples. Many problems adopt the i.i.d. assumption. On the other hand, out-of-distribution (o.o.d.) or systematic generalization (Fodor & Pylyshyn, 1988; Lake & Baroni, 2018) requires that training and test distributions are disjoint, so that test samples have zero probability under the training distribution. Systematic generalization is crucial for human learning and supports efficient data use and creativity. Therefore, machines are encouraged to acquire such generalization ability to achieve human-like intelligence.
Systematic generalization usually requires that a sample has multiple explanatory factors of variation (Bengio et al., 2013), and the generalization is enabled by producing an unseen combination of seen factor values. For example, models trained on blue rectangles and green triangles predict blue triangles. We adopt factors mainly in designing experiments and developing intuitions. This helps the experiments because new outputs are only related to function sharing between factors (Section 3), so we limit our claim to cases of factor recombination.
One stream of artificial intelligence is Connectionism (Feldman & Ballard, 1982; Rumelhart et al., 1986), which uses many simple neuron-like units richly interconnected and processed in parallel. It was criticized that Connectionist models do not support systematic generalization well (Fodor & Pylyshyn, 1988; Marcus, 1998). Deep learning (LeCun et al., 2015) originates from Connectionism, and various techniques have enabled multi-layer models and improved performance on i.i.d. problems in recent years. Also, specific algorithms have been proposed to equip deep learning with systematic generalization ability (Russin et al., 2019; Lake, 2019). However, there has been less discussion of why standard deep learning models do not achieve systematic generalization.
This paper addresses the above question by looking into a built-in characteristic of deep learning. A node (or a set of nodes) is a function of the network input. By function sharing, we mean that nodes in one layer share inputs from nodes in the previous layer. It may also be called activation sharing or feature sharing. We hypothesize that function sharing is one of the reasons that prevent systematic generalization in deep learning. Under equivalent prediction, a classification network partitions an input space into multiple parts separated by boundaries. Function sharing prefers to reuse boundaries and avoid redundant ones for the training predictions. It leads to fewer parts for new outputs and weakens systematic generalization. The nearest neighbor classifier is an analogy for this effect: it predicts a training sample's output for any test sample, so it does not have a part for new outputs. We also discuss that function sharing arises naturally in deep learning (Section 4).
Figure 2 has an intuitive example explaining why the conflict happens. A test sample (in orange) equals a set of training samples (+/+) on the first output. Then they are also equal on the second output if the function is reused (see caption). Therefore, they are equal in all the outputs, which conflicts with systematic generalization. More generally, the two boundaries jointly partition an input space into multiple parts, and deep learning prefers (a) because it has fewer parts than (b). Figure 3 has a visualized example. It is similar to the example in Figure 2a. The two functions are shared in the top-right region, and few inputs are predicted as the new combination. Figure 1 has a simplified plot for a result in the experiment section. As the degree of function sharing increases (more sharing layers), the accuracy on the test dataset, i.e., the generalization capacity, decreases accordingly. This supports that function sharing weakens systematic generalization. Please refer to Section 3 for more details.
This paper contributes to uncovering a built-in conflict between deep learning and systematic generalization. We hope this study provides novel insights, forms a basis for new research directions, and helps improve machine intelligence to the human level.
2 A BUILT-IN CONFLICT
We hypothesize a built-in conflict between the function sharing in deep learning and systematic generalization. We cover the definition, assumptions, and derivations of the conflict.
2.1 SYSTEMATIC GENERALIZATION
We have an input set X and an output set Y. Y is decided by K factors Y1, . . . , YK. The training set Dtrain has inputs Xtrain and outputs Ytrain. The test set Dtest has inputs Xtest and outputs Ytest. In systematic generalization, Ytrain and Ytest are disjoint; however, the values of each factor i are included in the training outputs. A model f maps an input X to the prediction f(X). A model enables systematic generalization if it correctly predicts the ground-truth test outputs.

Definition 1 (Systematic generalization). The dataset requires that any test label is not a training label, but each factor of a test label is seen in a training label:

∀(x, y) ∈ Dtest : y ∉ Ytrain, and ∀i = 1, . . . , K, ∃y′ ∈ Ytrain : y′i = yi.

A function f enables systematic generalization if ∀(x, y) ∈ Dtest : y = f(x).
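To make the definition concrete, here is a minimal Python sketch (not from the paper; the helper name and the label-tuple encoding are ours) that checks whether a train/test label split satisfies Definition 1:

```python
def satisfies_definition_1(train_labels, test_labels):
    """Check Definition 1: no test label tuple appears in training,
    but every factor value of every test label is seen in some training label."""
    train_set = set(train_labels)
    for y in test_labels:
        if y in train_set:                      # test combination must be unseen
            return False
        for i, y_i in enumerate(y):             # each factor value must be seen
            if not any(yp[i] == y_i for yp in train_set):
                return False
    return True

# Example with two ten-class factors (the split used later in Section 3):
train = [(y1, (y1 + d) % 10) for y1 in range(10) for d in range(5)]
test = [(y1, (y1 + d) % 10) for y1 in range(10) for d in range(5, 10)]
assert satisfies_definition_1(train, test)
```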
When the model is well trained, we assume it correctly predicts training samples.

Assumption 1 (Correct training prediction). ∀(x, y) ∈ Dtrain : y = f(x).
2.2 FUNCTION SHARING
In Figure 2, deep learning prefers the model in (a) over the model in (b). We denote the models as f and g (mapping from input to output), respectively. We see that g has one more region for the new factor combination between the magenta and cyan boundaries. This region exists because the magenta boundary in g splits the +/+ region in f into two parts. It means the partition under g refines that under f , but not vice versa.
We assume a property of function sharing. For general models f and g, deep learning prefers f over g more or equally if the partition under g is a refinement of that under f. Equivalently, if two inputs have an identical prediction in g, their predictions are still equal in f; an equal prediction in g implies one in f.

Assumption 2 (Function sharing). Deep learning prefers f over g more or equally if

∀xa, xb ∈ X : g(xa) = g(xb) =⇒ f(xa) = f(xb).

Note that this is a bias in the learning process. In case we need f to be strictly more preferred over g, we can additionally show that an identical prediction in f does not imply one in g.
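As an illustration, the refinement premise of Assumption 2 can be checked empirically on a finite sample of inputs. The following is a hedged sketch, assuming the models are plain Python callables with hashable outputs; the function name is ours:

```python
from collections import defaultdict

def g_refines_f(f, g, inputs):
    """Check the premise of Assumption 2 on a finite input sample:
    whenever two inputs get equal predictions under g, they must also
    get equal predictions under f (i.e., g's partition refines f's)."""
    cells = defaultdict(set)              # group inputs by their g-prediction
    for x in inputs:
        cells[g(x)].add(f(x))
    # each g-cell must map to a single f-prediction
    return all(len(f_preds) == 1 for f_preds in cells.values())

# Toy check: g's partition refines f's, but not vice versa.
inputs = range(8)
g = lambda x: x % 4
f = lambda x: x % 2
assert g_refines_f(f, g, inputs)      # x % 4 determines x % 2
assert not g_refines_f(g, f, inputs)  # but x % 2 does not determine x % 4
```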
We consider what causes the function sharing. Intuitively, the preference comes from greedy optimization and the task requirement to learn a complicated model. A model learns to split or partition inputs by different predictions. Some splits (each of which can be a factor) might be easier to learn than others. For example, learning to split inputs by color is more straightforward than by shape. Greedy optimization then learns such a split quickly and reuses its intermediate functions to learn other splits. It is not likely that multiple splits (or factors) are learned equally fast throughout the training process.
The mechanism of reusing function is similar to auxiliary tasks. The parameters are updated to address a complicated main task while quickly learning and keeping the prediction ability for more straightforward auxiliary tasks. A more common but less similar example is pre-training. Pretrained modules, such as image feature extractors or word embeddings, share the internal functions with the main target tasks. However, the pre-training and the main task training do not happen simultaneously. We will discuss more insights in Section 4 and Appendix C.
2.3 THE CONFLICT
We derive propositions for proving the theorem and explaining the reasons for the phenomena. In practice, the assumptions may not hold strictly, and the effects come more from the soft biases of the conclusions. The proofs are in Appendix B.
Assumption 2 leads a model to predict training outputs for any input.
Proposition 1 (Seen prediction). From Assumption 2, ∀x ∈ X : f(x) ∈ f(Xtrain).
Informally, suppose the set of inputs that a function f does not map to the training outputs f(Xtrain) is non-empty. In that case, we can design another function f′ that predicts a training output for these inputs and keeps the predictions for all other inputs. Then both f and f′ distinguish training outputs equivalently. With Assumption 2, f′ is preferred; hence ∀x ∈ X : f(x) ∈ f(Xtrain). This argument does not use Assumption 1, indicating that the phenomenon may happen before a model is well trained.
If a model performs well for the training set, it predicts training ground-truth output for any input.
Proposition 2 (Seen label). From Assumption 1 and Proposition 1, ∀x ∈ X : f(x) ∈ Ytrain.
It says that the prediction is never a new output. This is a stronger argument than avoiding one particular new output for each input, as required in systematic generalization. It explains that the prediction is incorrect because any new output is resisted. We evaluate it in Section 3.
We then look at the conflict. For any test sample (x, y), the definition of systematic generalization requires that the output y is not a training output Ytrain. However, this contradicts Proposition 2.
Theorem 1 (The conflict). From Definition 1 and Proposition 2, ∀(x, y) ∈ Dtest : y ̸= f(x).
We covered the definition of systematic generalization and the function sharing assumption, and then derived propositions and a theorem establishing the conflict.
3 EXPERIMENTS
We run experiments to show that function sharing reduces the ability of systematic generalization in deep learning. We not only compare sharing versus not sharing but also adjust the degree of sharing. We focus on the cases where new outputs are unseen combinations of seen factor values. We cover different standard deep neural network models. The details of the networks and experiments can be found in Appendix D. More experiments with natural inputs are in Appendix A. The results on zero-shot learning datasets are in Appendix E, where factors are mainly related to the input locality. We look at the experiment settings and results below.
3.1 SETTINGS
Data preparation We construct a dataset from two ten-class classification datasets. The training data are generated from the original training dataset. We first choose the output y and then choose x based on it. y1 is chosen from all possible labels. y2 is chosen from five classes {y1, y1 + 1, . . . , y1 + 4} (we use modular arithmetic for labels). The test data are generated from the original test dataset. y1 is chosen in the same way as in training. y2 is chosen from the other classes {y1 + 5, y1 + 6, . . . , y1 + 9}. In this design, training and test label combinations are mutually exclusive, but the test labels for each output factor are seen in training. Any factor label appears evenly in training and test combinations. x1 and x2 are chosen conditioned on their labels y1 and y2, respectively, and merged as the input x. The original datasets and input merge methods vary for each experiment. All the choices follow uniform distributions.
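A minimal sketch of this pairing procedure is shown below, assuming the base datasets are available as NumPy arrays of inputs and integer labels; the function name, the pool arguments, and the averaging merge are illustrative, not the paper's released code:

```python
import numpy as np

def make_pair_dataset(x1_pool, y1_pool, x2_pool, y2_pool, n, train=True, rng=None):
    """Sample n merged inputs. Train: y2 in {y1, ..., y1+4} (mod 10);
    test: y2 in {y1+5, ..., y1+9} (mod 10), so label combinations are disjoint."""
    rng = rng or np.random.default_rng(0)
    offsets = np.arange(5) if train else np.arange(5, 10)
    xs, ys = [], []
    for _ in range(n):
        y1 = rng.integers(10)
        y2 = (y1 + rng.choice(offsets)) % 10
        x1 = x1_pool[rng.choice(np.flatnonzero(y1_pool == y1))]
        x2 = x2_pool[rng.choice(np.flatnonzero(y2_pool == y2))]
        xs.append((x1 + x2) / 2.0)           # merge by averaging at each node
        ys.append((y1, y2))
    return np.stack(xs), np.array(ys)
```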
Architecture To evaluate the influence of function sharing on new outputs, we change the function sharing ability while keeping other properties stable for a deep learning model. So we modify the function sharing related to new outputs. Since a new output appears only as a new factor combination in our setting, we adjust the sharing between output factors. This avoids having to remove all sharing and the attendant difficulties in model design. We choose a layer and duplicate the following layers, keeping the total number of hidden nodes in each layer if feasible (Figure 4). We call the former part a shared network and the latter parts individual networks. Each individual network predicts one output factor, so the output is disentangled. We will discuss entangled outputs in Section 4. We keep the depth of the whole network fixed and change the depths of the shared network and the individual networks. Note that sharing only the input layer amounts to learning two separate models.
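One plausible Keras realization of this shared/individual split for the fully connected case is sketched below; `shared_depth`, the layer widths, and halving the individual-branch widths are our assumptions about how "keeping the number of all hidden nodes" might be implemented:

```python
import tensorflow as tf

def shared_individual_mlp(shared_depth, total_depth=7, width=512, n_classes=10):
    """Shared trunk of `shared_depth` layers, then two individual branches
    (one per output factor) covering the remaining depth.
    shared_depth = 0 trains two separate networks; = total_depth shares everything."""
    inp = tf.keras.Input(shape=(784,))
    h = inp
    for _ in range(shared_depth):                      # shared network
        h = tf.keras.layers.Dense(width, activation="relu")(h)
    outs = []
    for factor in range(2):                            # individual networks
        b = h
        for _ in range(total_depth - shared_depth):
            # halve the width so total hidden nodes per layer stay comparable
            b = tf.keras.layers.Dense(width // 2, activation="relu")(b)
        outs.append(tf.keras.layers.Dense(n_classes, activation="softmax",
                                          name=f"factor_{factor}")(b))
    return tf.keras.Model(inp, outs)

model = shared_individual_mlp(shared_depth=3)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy")
```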
Evaluation metrics We use accuracy as the metric. A sample prediction is correct if all the outputs are correct. We have three types of accuracy. The first is the regular evaluation of test data for systematic generalization (a: Test Sample Accuracy), corresponding to Theorem 1. We also consider the set of inputs mapped to unseen output combinations, corresponding to Proposition 2. We evaluate whether the test samples predict one of the unseen output factor combinations (b: Test Set Accuracy). However, test samples are only a subset of the input space. If a model learns a different set of factors, the expected inputs may not be those of the test samples. So we also evaluate systematic generalization as a model property for any valid input. We randomly draw test inputs from the whole input space (c: Random Set Accuracy).¹ We run each experiment five times and plot the mean and the standard deviation (Figure 5). The result numbers are in Table 2 (Appendix D.3).
a : E(x,y)∼P(Dtest)[δ(f(x) = y)]
b : Ex∼P(Xtest)[δ(f(x) ∈ Ytest)]
c : Ex∼U[X][δ(f(x) ∈ Ytest)]
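These three estimates can be computed directly; the following sketch assumes `predict` maps one input to a hashable label tuple and that the unseen combinations are enumerated in `test_combos` (all names are ours):

```python
import numpy as np

def three_accuracies(predict, x_test, y_test, x_random, test_combos):
    """a: exact-match accuracy on test samples;
    b: fraction of test inputs predicted as *some* unseen combination;
    c: same as b, but over inputs drawn uniformly from the input space."""
    preds_test = [predict(x) for x in x_test]
    a = np.mean([p == tuple(y) for p, y in zip(preds_test, y_test)])
    b = np.mean([p in test_combos for p in preds_test])
    c = np.mean([predict(x) in test_combos for x in x_random])
    return a, b, c
```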
3.2 RESULTS
Fully Connected Network We use an eight-layer fully connected neural network with a flattened image input. We use the Fashion dataset (Xiao et al., 2017) and the MNIST dataset (LeCun et al., 1998). The datasets are simple, which avoids under-fitting the training data with a fully connected neural network. We merge the two inputs by averaging the values at each input node.
Convolutional Network We use a convolutional neural network with six convolutional layers and two fully connected layers. We use the CIFAR-10 dataset (Krizhevsky, 2009) and the Fashion dataset (Xiao et al., 2017). We scale the input sizes to that of the larger one and broadcast gray images to colored ones. We merge the inputs by averaging at each node. We use the Fashion dataset as one factor because the average of two colored images can cause training data under-fitting for convolutional neural networks.
Residual Network We use ResNet50 (He et al., 2016), which has five stages, each treated as a layer when changing the shared network depth. It uses the dataset setting from the CNN experiment.

Vision Transformer We use Vision Transformer (Dosovitskiy et al., 2021) with one fully connected layer for each patch, five attention layers, and two fully connected layers. We treat the patches as one layer. It uses the dataset setting from the CNN experiment.
LSTM A recurrent network has the same parameters for each layer, so it does not support learning different individual networks. Instead, we treat an LSTM as a layer. We use stacked LSTM models with an embedding layer, five bidirectional LSTM layers, and two fully connected layers. We use the Reuters dataset (Joachims, 1998) for both the first and the second datasets. We filter samples by a maximum input length of 200 and use the most frequent ten classes of samples. We merge inputs by concatenating two input texts. The inputs have different lengths because the text lengths vary.
We also run experiments for one-layer LSTM, which compares sharing or not sharing all layers. The results indicate that the shared network has less generalization than the individual network (Table 2).
¹ δ(·) is 1 if the statement is true and 0 otherwise. U[X] is the uniform distribution over valid inputs.
Transformer We use Transformer (Vaswani et al., 2017). Since it is a classification problem, we only use the encoder. It has one embedding layer, five hidden layers, and two fully connected layers. We use the same dataset setting as the LSTM experiment.
Summary of results Figure 5 shows that, for each evaluation, the accuracy at the left end (not sharing any hidden layer) is higher than that at the right end (sharing all hidden layers), and it generally decreases as the shared network depth increases. The results indicate that function sharing weakens systematic generalization.
4 DISCUSSIONS
4.1 ENTANGLED OUTPUT
Though the theory does not require disentangled outputs, the experiments use them to split the network into individual ones. We argue that predicting entangled outputs is not easier than predicting disentangled ones; hence the experimental conclusions will likely extend to entangled outputs. (1) Disentangled output is a particular and usually less complicated case of entangled output. (2) Entangled output can be seen as another shared layer, and the experiments show that increasing shared layers reduces systematic generalization ability, so entangled outputs are also likely to suffer from the problem. (3) We can see each output node of an entangled output as a factor, so it becomes a disentangled output. The generalization is even more difficult if a value is unseen for an output node.
4.2 BEYOND FACTOR RECOMBINATION
The definition of systematic generalization (Definition 1) requires that each test label factor is seen in a training label. However, this requirement is not directly used in the derivations, and the conclusions may apply to more general o.o.d. problems beyond recombining factors. A new output may correspond to an unseen activation for an output node, e.g., a new class in a classification problem. In such settings, it is sometimes argued that the bias parameter of the output node is a reason new values are not predicted, because it receives no training signal to increase its value. This work provides another reason for not predicting a new value.
4.3 WHY DOES FUNCTION SHARING HAPPEN IN DEEP LEARNING?
We discuss that function sharing can be caused by characteristics of deep learning: deep architecture, shareable network, and greedy optimization. Deep architecture and shareable networks make it possible for factors to share elaborated functions, and greedy optimization encourages the sharing. Deep architecture and greedy search are necessary for deep learning. Deep learning uses deep architecture to fit complicated non-linear functions. Deep learning has a large and complex parameter space. To search in it, we need some prioritization, which leads to a greedy search. The shareable network is widely used in standard deep learning models and works well for i.i.d. problems. However, it is less critical compared to the other ones.
4.4 POTENTIAL SOLUTIONS
We consider possible solutions to avoid function sharing and achieve systematic generalization. From the above discussion, we look at shareable networks. We consider the recombination of factors and focus on sharing between factors. Then, one potential solution uses individual networks for output factors, similar to the experiment setup. We discuss how to design networks when the input or the output is entangled. If the output is entangled, we can design an architecture where each individual network only changes one factor in the output. For example, one individual network changes color, and another changes shape. If the input is entangled, we need to extract factors from it to feed the individual networks. This raises two questions: how to avoid spurious influence from other factors, and how to keep the extraction working under the test distribution. We can bottleneck representations for the first and divide the input into units that are invariant across training and test for the second, e.g., words or objects.
5 RELATED WORK
Systematic generalization and deep learning Systematic generalization² (Fodor & Pylyshyn, 1988; Lake & Baroni, 2018; Bahdanau et al., 2019) is considered the “Great Move” of evolution, caused by the need to process an increasing amount and diversity of environmental information (Newell, 1990). Cognitive scientists see it as central for an organism to view the world (Gallistel & King, 2011). Studies indicate it is related to the prefrontal cortex (Robin & Holyoak, 1995). It was discussed that commonsense is critical (Mccarthy, 1959; Lenat et al., 1986) for systematic generalization, and recent works aim to find general prior knowledge (Goyal & Bengio, 2020), e.g., Consciousness Prior (Bengio, 2017). Levels of systematicity were defined (Hadley, 1992; Niklasson & van Gelder, 1994), and types of tests were summarized (Hupkes et al., 2020). We focus on the primary case with an unseen combination of seen factor values.

² It is also called compositional generalization in other literature.
A closely related field is causal learning, rooted in the eighteenth century (Hume, 2003) and classical fields of AI (Pearl, 2003). It was mainly explored from statistical perspectives (Pearl, 2009; Peters et al., 2016; Greenland et al., 1999; Pearl, 2018) with do-calculus (Pearl, 1995; 2009) and interventions (Peters et al., 2016). The causation forms Independent Causal Mechanisms (ICMs) (Peters et al., 2017; Schölkopf et al., 2021). Systematic generalization is the counterfactual when the joint input distribution is intervened to have new values with zero probability in training (covariate shift). This work indicates that standard neural networks do not prefer to learn ICMs.
Parallel Distributed Processing (PDP) models (Rumelhart et al., 1986) use Connectionist models with distributed representations, which describe an object in terms of a set of factors. Though they have the potential to combine the factors to create unseen object representations (Hinton, 1990), it was criticized that they do not address systematic generalization in general (Fodor & Pylyshyn, 1988; Marcus, 1998). Deep learning is a recent PDP model with many achievements (LeCun et al., 2015; He et al., 2016). It was studied that deep neural networks use the composition of functions to achieve high performance (Montufar et al., 2014). The improvements in i.i.d. problems encourage to equip deep learning with systematic generalization.
Recent directions In addition to architecture design (Russin et al., 2019; Andreas et al., 2016) and data augmentation (Andreas, 2020; Akyürek et al., 2021; Jia & Liang, 2016), the main perspectives for systematic generalization approaches include disentangled representation learning, attention mechanism, and meta-learning.
Disentangled representation (Bengio et al., 2013) is learned in unsupervised manners. Early methods learn the representation from statistical independence (Higgins et al., 2017; Locatello et al., 2019). Later, the definition of disentangled representation was proposed with symmetry transformation (Higgins et al., 2018). It leads to Symmetry-based Disentangled Representation Learning (Caselles-Dupré et al., 2019; Painter et al., 2020; Pfau et al., 2020). A disentangled representation learning model can be used as a feature extractor for other systematic generalization tasks.
Attention mechanisms are widely used in neural networks (Bahdanau et al., 2015). Transformers (Vaswani et al., 2017) are modern neural network architectures with self-attention. Recurrent Independent Mechanisms (Goyal et al., 2021b) use attention and the name of the incoming nodes for variable binding. Global workspace (Goyal et al., 2021a) improves them by using limited-capacity global communication to enable the exchangeability of knowledge. Discrete-valued communication bottleneck (Liu et al., 2021) further enhances systematic generalization ability.
Meta-learning (Lake, 2019) usually designs a series of training tasks for learning a meta-learner and uses it in a target task. Each task has training and test data, where test data requires systematic generalization from training data. When ICMs are available, they can be used to generate meta-learning tasks (Schölkopf et al., 2021). Meta-reinforcement learning was used for causal reasoning (Dasgupta et al., 2019). Meta-learning can also capture the adaptation speed to discover causal relations (Bengio et al., 2020; Ke et al., 2019).
Deep learning is a fast-growing field, and many efforts focus on designing architectures and algorithms to improve its performance. However, it is less discussed why standard deep learning models do not achieve systematic generalization. This paper looks into a built-in conflict.
6 CONCLUSION
This paper investigates a built-in conflict between deep learning and systematic generalization. It explains one of the reasons why standard neural networks seldom achieve systematic generalization. We hypothesize that the conflict is caused by sharing internal functions, and experiments support it. A model partitions an input space into multiple parts separated by boundaries. The function sharing tends to reuse the boundaries, leading to fewer parts for new outputs, which conflicts with systematic generalization. The phenomena are shown in different standard deep neural networks. We hope this finding provides a new understanding of systematic generalization mechanisms in deep learning and helps to improve machine learning algorithms for a higher level of artificial intelligence.
A MORE EXPERIMENTS
We run experiments with natural inputs. For image data, we use NICO++ dataset (Zhang et al., 2022). We use five foregrounds as the first output label and five backgrounds as the second. For text data, we use Amazon reviews (Ni et al., 2019). We use five categories as the first output label and five ratings as the second. Either dataset has two outputs, each containing five possible classes. There are 25 class combinations, and we separate them into 15 training combinations and 10 test ones in a similar way as in the experiment section. For fully connected networks, we use the Fashion dataset (ten classes) and render with ten colors. Please refer to Figure 6 and Table 1 for examples.
For the image dataset, we aggregated foregrounds into five abstract classes, e.g., mammal and vehicle. This makes better use of the limited data with annotations on combined labels. We use 72,176 image samples. For text data, we randomly select 100,000 samples for each category, with a length limit of 100 tokens. Data with training combinations are randomly split into training and i.i.d. generalization data with a ratio of 9:1.
The results are in Figure 7. Similar to the results in the experiment section, the test accuracies decrease as there are more shared layers. Also, both the training accuracy and the i.i.d. generalization accuracy do not decrease as much as the test accuracies.
We run ablations. Figure 8 shows the results with different layer widths. Figure 9 shows the results with dropout (Srivastava et al., 2014) and mixup (Zhang et al., 2018). All the results have similar effects as the original experiment.
B PROOFS
Proposition 1 (Seen prediction). From Assumption 2, ∀x ∈ X : f(x) ∈ f(Xtrain).
Proof. We are going to prove that for a function g with at least one input mapped to a new output, there exists a more preferred function f with all inputs mapped to seen outputs. Therefore, if no other function is preferred over a given function, that function follows the proposition.

Given a function g, we construct a function f. We pick x′ ∈ Xtrain. For all x ∈ X:

if g(x) ∈ g(Xtrain) : f(x) = g(x);
otherwise : f(x) = f(x′).

(Note that f(x′) is well-defined by the first case, since x′ ∈ Xtrain.) Then, ∀xa, xb ∈ X:

if g(xa) ∈ g(Xtrain) : g(xa) = g(xb) =⇒ f(xa) = g(xa) = g(xb) = f(xb);
otherwise : g(xa) = g(xb) =⇒ g(xb) ∉ g(Xtrain) =⇒ f(xa) = f(x′) = f(xb).

In both cases, g(xa) = g(xb) =⇒ f(xa) = f(xb).

On the other hand, ∃x ∈ X : g(x) ∉ g(Xtrain) =⇒ g(x) ≠ g(x′) ∈ g(Xtrain), while f(x) = f(x′). So f(x) = f(x′) does not imply g(x) = g(x′). With Assumption 2, f is preferred over g.

Also, ∀x ∈ X:

if g(x) ∈ g(Xtrain) : ∃x″ ∈ Xtrain : g(x) = g(x″) =⇒ f(x) = f(x″) ∈ f(Xtrain);
otherwise : f(x) = f(x′) ∈ f(Xtrain).

Therefore, ∀x ∈ X : f(x) ∈ f(Xtrain).
Proposition 2 (Seen label). From Assumption 1 and Proposition 1, ∀x ∈ X : f(x) ∈ Ytrain.
Proof. From Assumption 1, f(Xtrain) = Ytrain. From Proposition 1,
∀x ∈ X : f(x) ∈ f(Xtrain) = Ytrain
Theorem 1 (The conflict). From Definition 1 and Proposition 2, ∀(x, y) ∈ Dtest : y ̸= f(x).
Proof. ∀(x, y) ∈ Dtest: y ̸∈ Ytrain (Definition 1), x ∈ Xtest ⊆ X =⇒ f(x) ∈ Ytrain (Proposition 2). Therefore y ̸= f(x).
C A CONJECTURE ON WHY THE FUNCTION SHARING HAPPENS
The neural network training process is complicated, so it is hard to describe what happens. Instead, we have a conjecture.
For a large network, when a boundary is learned for the first time, it separates the problem into two sub-problems, and their learning processes do not influence each other for the rest of the training.
We first explain the idea with an analogy of a binary decision tree (or a hierarchical classification). We then define a boundary and discuss its properties.
Binary Decision tree We consider a binary decision tree in which, at each decision node, a label set is separated into two parts, one for each sub-tree. For example, if there are 10 classes, the root node may divide them into 6 and 4 classes. Then the root node of the first sub-tree divides the 6 classes into two sets of 3 classes. In this decision tree, a node separates the input space into two parts with disjoint output labels, and each part is learned separately.
Such a decision tree does not predict new outputs because all leaf nodes predict seen outputs. We argue that the neural network training process is similar, in some respects, to creating such a decision tree. A decision tree node corresponds to a boundary in a neural network.
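The construction can be sketched in a few lines of Python (an illustrative toy, not a claim about the paper's implementation); the point is that every leaf stores a label seen in training, so the classifier can never emit a new output:

```python
def build_tree(labels, make_boundary):
    """Recursively split a label set in half. `make_boundary(left, right)`
    is a stand-in for learning a boundary: it returns a function of the
    input that is True when the input belongs to the left label set."""
    labels = sorted(labels)
    if len(labels) == 1:
        return labels[0]                       # leaf: predicts a seen label
    mid = len(labels) // 2
    left, right = labels[:mid], labels[mid:]
    return (make_boundary(left, right),
            build_tree(left, make_boundary),
            build_tree(right, make_boundary))

def predict(tree, x):
    while isinstance(tree, tuple):             # descend until a leaf
        boundary, left, right = tree
        tree = left if boundary(x) else right
    return tree                                # always a training label

# Toy usage: inputs are the labels themselves, and each "learned" boundary
# simply tests membership in the left label set.
tree = build_tree(range(10), lambda left, right: (lambda x: x in set(left)))
assert all(predict(tree, x) == x for x in range(10))
assert predict(tree, 42) in range(10)          # unseen input -> still a seen label
```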
Boundary We consider a problem P = (𝒳, X, Y) with input space 𝒳, input set X, and output set Y. We have a ground-truth mapping f : 𝒳 → Y and a learned mapping f̂ : 𝒳 → Y. We define a boundary as follows.

Definition 2 (Boundary). Suppose we have a binary partition (𝒳a, 𝒳b) of an input space 𝒳, with 𝒳a, 𝒳b non-empty. Let

𝒳a ∪̇ 𝒳b = 𝒳, Xa = X ∩ 𝒳a, Xb = X ∩ 𝒳b, Ya = f(Xa), Yb = f(Xb).

It is a boundary if the output sets are disjoint and, for each part, all inputs map to its output set:

Ya ∪̇ Yb = Y, f̂(𝒳a) = Ya, f̂(𝒳b) = Yb.
A boundary separates a problem P = (𝒳, X, Y) into two sub-problems Pa = (𝒳a, Xa, Ya) and Pb = (𝒳b, Xb, Yb). We assume that when the boundary is learned for the first time, the learning processes of Pa and Pb do not influence each other for the rest of the training.

Assumption 3 (Separate sub-problems). When a network is large enough, learning one sub-problem does not affect the prediction of another sub-problem.
With this assumption, for a large network, a boundary separates the original problem into two problems whose training processes do not influence each other. The assumption applies recursively to each sub-problem. When a problem has only one label, all of its inputs are mapped to that label. So the model learns not to predict an unseen output. This learning process is similar to that of a decision tree.
D EXPERIMENT DETAILS
D.1 VISUALIZATION SETTINGS
The model is a fully connected neural network with two input and two output nodes. It has six hidden layers with ReLU activations, and each hidden layer has eight nodes. We use a mini-batch size of 10 with a learning rate of 0.01. We iterate until the model prediction becomes stable. Please see the original work of deep playground for more information. We use six Intel(R) Core(TM) i5-8400 2.80GHz CPUs, and the asset has a public license.
D.2 EXPERIMENT SETTINGS
We use GeForce GTX 1080 or GeForce GTX 1050 Ti GPU for single GPU experiments. We use TensorFlow for implementation. The assets have a public license.
Each input element is linearly scaled to [-0.5, 0.5] for image input. We also uniformly sample from this interval for random image input. We select two sentence lengths uniformly from valid integers (one to maximum length) and then generate each word uniformly from the vocabulary for random text input.
Fully Connected Network The input shape is 28 × 28, flattened to a vector. There are seven fully connected layers. Each of them has 512 hidden nodes and ReLU activation. The output has ten nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 2,000 iterations. Each evaluation uses 10,000 samples.
Convolutional Network The input shape is 32 × 32 × 3. There are seven convolutional layers. Each of them has 3 × 3 kernel size with 64 channels. Then the layer is flattened. We have a fully connected layer with 128 nodes and ReLU activation. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 5,000 iterations. Each evaluation uses 10,000 samples.
Residual Network The input is the same as CNN. The model is the standard ResNet50 implementation. The hidden groups are treated as one layer, so there are five hidden layers. The hidden layer size is 64. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 10,000 iterations. Each evaluation uses 10,000 samples.
Vision Transformer The input is the same as CNN. The model is the standard Vision Transformer implementation with seven hidden layers. The hidden layer size is 64. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 10,000 iterations. Each evaluation uses 10,000 samples.
LSTM The vocabulary size is 30,977, including a start symbol and padding symbol. The input length is 200. The embedding size is 64. There are seven stacked bidirectional LSTM layers, and each has 32 hidden nodes for each direction. Then the output is flattened. The output layer is a fully-connected layer with ten output nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 64, and we train 1,000 iterations. Each evaluation uses 10,000 samples.
Transformer The input is the same as that of LSTM. The embedding size is 64. There are seven hidden groups. The hidden layer size is 64. The output is flattened. The output layer is a fully connected layer with ten output nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 64, and we train 2,000 iterations. Each evaluation uses 10,000 samples.
D.3 EXPERIMENT RESULTS
We numerically compare the individual (left ends in the result figures, 0-max) and shared (right ends, max-0) networks in Table 2. LSTM is the stacked LSTM, and LSTM-1 has only one LSTM layer. The table shows that the shared network has lower scores than the individual network on all three types of accuracy, indicating that function sharing hinders systematic generalization.
E ZERO-SHOT LEARNING DATASETS
We look at the results for zero-shot learning. We use the aPY (Farhadi et al., 2009), AwA2 (Xian et al., 2019), CUB (Wah et al., 2011), and SUN (Patterson & Hays, 2012) datasets. In these datasets, factors are mainly related to the input locality (Sylvain et al., 2020). We use the pre-extracted input features for aPY and AwA and image input for CUB and SUN. We construct output labels from attributes. Each attribute is a binary number, 1 if it exists in a sample and 0 otherwise. We select the six attributes whose average value is most balanced (closest to 0.5) over all data. The first, third, and fifth attributes are used for the first output, and the others for the second output. Each output has eight classes yielded from all the combinations of three binary attributes. For aPY, samples may share the same image, so we construct training data from the original training data and test data from the original test data. For the other datasets, we split all data into disjoint training and test data.
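A sketch of this attribute-to-label construction, assuming the annotations are given as a binary NumPy matrix; the tie-breaking among equally balanced attributes and the bit-packing order are our assumptions:

```python
import numpy as np

def attribute_outputs(attr_matrix):
    """attr_matrix: (n_samples, n_attributes) binary array.
    Pick the six attributes whose mean is closest to 0.5, then pack
    the first, third, and fifth of that selection into the first
    8-class output and the remaining three into the second."""
    balance = np.abs(attr_matrix.mean(axis=0) - 0.5)
    top6 = np.argsort(balance)[:6]             # most balanced attributes
    bits = attr_matrix[:, top6].astype(int)
    weights = np.array([4, 2, 1])
    y1 = bits[:, [0, 2, 4]] @ weights          # three bits -> class in 0..7
    y2 = bits[:, [1, 3, 5]] @ weights
    return y1, y2
```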
We use an eight-layer CNN model, the same as in the experiment section. The batch size is 512 for aPY and AwA and 256 for CUB and SUN. Other settings are the same as in the experiment section. The results are shown in Figure 10 and Table 4. Similar to the previous experiments, increasing the depth of the shared network reduces systematic generalization capability.
F MORE DISCUSSIONS
F.1 TRAINING PROCESS
We discussed that sharing boundaries reduces the number of partitions and shrinks the area for new outputs (Proposition 1). We run experiments to find when this happens during training. We sample 10,000 inputs from the test data, and if an output combination has at least 50 samples, we regard it as a new-output (o.o.d.) partition. We plot the number of o.o.d. partitions and the ratio of test samples in the o.o.d. partitions for shared and individual networks in Figure 11. The experiment settings follow the DNN and CNN settings in the experiment section. The results show that the differences start to appear early in training.
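A minimal sketch of this partition count, assuming `predict` returns a hashable output combination per input; the threshold of 50 follows the text, everything else is illustrative:

```python
from collections import Counter

def count_ood_partitions(predict, inputs, train_combos, min_count=50):
    """Count predicted output combinations that are unseen in training and
    cover at least `min_count` of the sampled inputs, plus the fraction of
    samples falling into such o.o.d. partitions."""
    counts = Counter(predict(x) for x in inputs)
    ood = {c: n for c, n in counts.items()
           if c not in train_combos and n >= min_count}
    ratio = sum(ood.values()) / len(inputs)
    return len(ood), ratio
```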
F.2 EQUALLY DIFFICULT FACTORS
We look at the results when the two inputs are equally hard to learn. We use two CIFAR-10 datasets for both the first and the second datasets. Since the average of two colored images can cause training data under-fitting, we merge the inputs by concatenating them along the channel dimension, so the input has six channels. The results are shown in Figure 12 and Table 4. Similar to the previous experiments, when the difficulties are equal, the depth of the shared network weakens systematic generalization.
F.3 LABEL COMBINATIONS
We also test other types of training label distributions. We design tile and one-shot combinations. In tile, a label combination is used for training when Y1 < 5 or Y2 < 5. It is similar to the split in the illustrative example. In one-shot, a label combination is used for training when Y1 < 9 or Y2 < 1. In this case, (9, 0) is the only training combination containing Y1 = 9, which is similar to one-shot learning. The results for the fully connected neural network are in Figure 13 and Table 5. They are similar to the results in the experiment section.
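A small sketch of these membership rules (the "diagonal" name for the main split from the experiment section is ours):

```python
def is_training_combo(y1, y2, scheme):
    """Which (y1, y2) label combinations are used for training under each
    split scheme; the remaining combinations are held out for test."""
    if scheme == "diagonal":                   # split from the experiment section
        return (y2 - y1) % 10 < 5
    if scheme == "tile":
        return y1 < 5 or y2 < 5
    if scheme == "one-shot":                   # (9, 0) is the only combo with y1 = 9
        return y1 < 9 or y2 < 1
    raise ValueError(scheme)

train = [(a, b) for a in range(10) for b in range(10)
         if is_training_combo(a, b, "one-shot")]
assert sum(1 for a, b in train if a == 9) == 1   # one-shot: y1 = 9 seen once
```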
1. What is the focus of the paper regarding deep neural networks?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's experiments and conclusions?
Summary Of The Paper
This paper investigates systematic generalization in deep neural networks. Systematic generalization here refers to the ability of an algorithm to produce outputs that were not observed during training time. A potential reason for this is postulated: the lack of systematic generalization in deep neural networks is due to function sharing, i.e. that each layer in the network uses a common representation from the previous layer. Experiments show that networks with fewer shared intermediate layers exhibit a greater degree of systematic generalization than those with more shared layers.
Strengths And Weaknesses
Strengths
The broad aim of investigating stronger forms of generalization that move beyond the i.i.d. case is interesting.
The experimental observation that having fewer shared layers leads to better systematic generalization holds across multiple diverse architectures and datasets.
As far as I know, investigating function sharing as a reason for lack of systematic generalization is a novel approach.
Weaknesses
The paper is not written very clearly. Sections that are difficult to understand include: the mathematical notation (e.g. writing that f is a "model" but not explaining that this is simply a mapping from the input space to the output space), the experiment section (particularly how the labels were generated and what the different evaluation metrics mean), and the discussion section.
Some intuitions are claimed but not supported by adequate evidence: that deep neural networks prefer to learn a simple function and combine it with previously learned functions, and that neural networks greedily learn functions in order from simpler to more complex.
The experiments are not complete. Some relevant but missing pieces of information include training accuracy and computation time for the various levels of sharing. On a related note, the motivation for including the test set and random set accuracy metrics is unclear.
It is not clear how the knowledge introduced by this paper can be effectively used to improve systematic generalization. Training a multitude of independent networks, one for each underlying factor, does not seem like a practical course of action due to storage and computation constraints.
Clarity, Quality, Novelty And Reproducibility
As mentioned above, there are some significant issues with clarity, but I believe it is possible for them to be resolved in an updated version of the paper. The paper also seems novel enough in that it looks at function sharing as a potential underlying cause for lack of systematic generalization. The quality is below average, with issues including incomplete experimental evaluation and unsupported claims (e.g. greedy learning of functions and the mechanisms underlying function sharing). I also found the design choice to average inputs from two separate datasets to be unconventional.
One point that seems quite relevant but not addressed by the paper is the extent to which the phenomenon in Figure 2 is caused by a softmax activation, which assumes that classes are mutually exclusive. This seems to be an alternate reason that the case in Figure 2(b) does not arise, since this region occupied by the orange dot would be a region of low confidence and thus the network would be incentivized to sharpen the decision boundaries. Could this possibly be resolved by assuming a multi-output loss function, e.g. multiple sigmoids? Regarding reproducibility, code is included and thus reproducing the results does not seem to be a major barrier.
ICLR | Title
On a Built-in Conflict between Deep Learning and Systematic Generalization
Abstract
Out-of-distribution or systematic generalization is a desirable property that most deep learning algorithms lack. In this paper, we hypothesize that internal function sharing is one of the reasons to weaken systematic generalization in deep learning for classification tasks. Under equivalent prediction, a model partitions an input space into multiple parts separated by boundaries. The function sharing prefers to reuse boundaries, leading to fewer parts for new outputs, which conflicts with systematic generalization. We show such phenomena in standard deep learning models, such as fully connected, convolutional, residual networks, LSTMs, and (Vision) Transformers. We hope this study provides novel insights and forms a basis for new research directions to improve systematic generalization. Source codes are available in the supplementary material.
N/A
Out-of-distribution or systematic generalization is a desirable property that most deep learning algorithms lack. In this paper, we hypothesize that internal function sharing is one of the reasons to weaken systematic generalization in deep learning for classification tasks. Under equivalent prediction, a model partitions an input space into multiple parts separated by boundaries. The function sharing prefers to reuse boundaries, leading to fewer parts for new outputs, which conflicts with systematic generalization. We show such phenomena in standard deep learning models, such as fully connected, convolutional, residual networks, LSTMs, and (Vision) Transformers. We hope this study provides novel insights and forms a basis for new research directions to improve systematic generalization. Source codes are available in the supplementary material.
1 INTRODUCTION
A fundamental property of artificial intelligence is generalization, where a trained model appropriately processes unseen test samples. Many problems adopt the i.i.d. assumption. On the other hand, out-of-distribution (o.o.d.) or systematic generalization (Fodor & Pylyshyn, 1988; Lake & Baroni, 2018) requires that training and test distributions are disjoint so that test samples have zero probability in training distribution. Systematic generalization is crucial for human learning and supports efficient data use and creativity. Therefore, machines are encouraged to acquire such generalization ability to achieve human-like intelligence.
Systematic generalization usually requires that a sample has multiple explanatory factors of variation (Bengio et al., 2013), and the generalization is enabled by producing an unseen combination of seen factor values. For example, models trained on blue rectangles and green triangles predict blue triangles. We adopt factors mainly in designing experiments and developing intuitions. It helps experiments because new outputs are only related to function sharing between factors (Section 3). So we limit our claim to the cases for recombination of factors.
One stream of artificial intelligence is Connectionism (Feldman & Ballard, 1982; Rumelhart et al., 1986), which uses many simple neuron-like units richly interconnected and processed in parallel. It was criticized that Connectionist models do not support systematic generalization well (Fodor & Pylyshyn, 1988; Marcus, 1998). Deep learning (LeCun et al., 2015) originates from Connectionism, and various techniques have enabled multiple-layer modelings and improved performance on i.i.d. problems in recent years. Also, specific algorithms have been proposed to equip deep learning with systematic generalization ability (Russin et al., 2019; Lake, 2019). However, less discussion has been made on why standard deep learning models do not achieve systematic generalization.
This paper addresses the above question by looking into a built-in characteristic of deep learning. A node (or a set of nodes) is a function of the network input. By function sharing, we mean nodes in one layer share inputs from nodes in the previous layer. It may also be called activation sharing or feature sharing. We hypothesize that function sharing is one of the reasons to prevent systematic generalization in deep learning. Under equivalent prediction, a classification network partitions an input space into multiple parts separated by boundaries. Function sharing prefers to reuse boundaries and avoid redundant ones for training predictions. It leads to fewer parts for new outputs and weakens systematic generalization. The nearest neighbor classifier is an analogy for this effect because it predicts a training sample output for any test sample, so it does not have a part for new outputs. We also discuss that function sharing belongs naturally to deep learning (Section 4).
Figure 2 has an intuitive example explaining why the conflict happens. A test sample (in orange) equals a set of training samples (+/+) on
the first output. Then they are also equal on the second output if the function is reused (see caption). Therefore, they are equal in all the outputs, which conflicts with systematic generalization. More generally, the two boundaries jointly partition an input space into multiple parts, and deep learning prefers (a) because it has fewer parts than (b). Figure 3 has a visualized example. It is similar to the example in Figure 2a. The two functions are shared in the top-right region, and few inputs are predicted as the new combination. Figure 1 has a simplified plot for a result in the experiment section. As the degree of function sharing increases (more sharing layers), the accuracy of the test dataset or generalization capacity decreases accordingly. It supports that function sharing weakens systematic generalization. Please refer to Section 3 for more details.
This paper contributes to uncovering a built-in conflict between deep learning and systematic generalization. We hope this study provides novel insights, forms a basis for new research directions, and helps improve machine intelligence to the human level.
2 A BUILT-IN CONFLICT
We hypothesize a built-in conflict between the function sharing in deep learning and systematic generalization. We cover the definition, assumptions, and derivations of the conflict.
2.1 SYSTEMATIC GENERALIZATION
We have input set X and output set Y . Y is decided by K factors Y1, . . . , YK . The training set Dtrain has input Xtrain and output Ytrain. The test set Dtest has input Xtest and output Ytest. In systematic generalization, Ytrain and Ytest are disjoint. However, the values for each factor i are included in the training output. A model f maps input X to the prediction of output f(X). A model enables systematic generalization if it correctly predicts the ground-truth test outputs. Definition 1 (Systematic generalization). The dataset requires that any test label is not a training label, but each factor of a test label is seen in a training label.
∀(x, y) ∈ Dtest : y ̸∈ Ytrain, ∀i = 1, . . . ,K, ∃y′ ∈ Ytrain : y′i = yi A function f enables systematic generalization if ∀(x, y) ∈ Dtest : y = f(x).
When the model is well trained, we assume it correctly predicts training samples. Assumption 1 (Correct training prediction). ∀(x, y) ∈ Dtrain : y = f(x).
2.2 FUNCTION SHARING
In Figure 2, deep learning prefers the model in (a) over the model in (b). We denote the models as f and g (mapping from input to output), respectively. We see that g has one more region for the new factor combination between the magenta and cyan boundaries. This region exists because the magenta boundary in g splits the +/+ region in f into two parts. It means the partition under g refines that under f , but not vice versa.
We assume a property of function sharing. For general models f and g, deep learning prefers f over g more or equally if the partition under g is a refinement of that under f . Equivalently, if two inputs have an identical prediction in g, their predictions are still equal in f . It means an equal prediction in g implies that in f . Assumption 2 (Function sharing). Deep learning prefers f over g more or equally if
∀xa, xb ∈ X : g(xa) = g(xb) =⇒ f(xa) = f(xb)
Note that it is a bias in the learning process. In case we need f to be strictly more preferred over g, we can additionally show that identical prediction in f does not imply that in g.
We consider what causes the function sharing. Intuitively, the preference comes from greedy optimization and the task requirement to learn a complicated model. A model learns to split or partition inputs by different predictions. Some splits (can be a factor) might be easier to learn than others. For example, learning to split inputs by color is more straightforward than by shape. Then the greedy optimization learns it quickly and reuses its intermediate functions to learn other splits. It is not likely that multiple splits (or factors) are learned equally fast during the whole training process.
The mechanism of reusing function is similar to auxiliary tasks. The parameters are updated to address a complicated main task while quickly learning and keeping the prediction ability for more straightforward auxiliary tasks. A more common but less similar example is pre-training. Pretrained modules, such as image feature extractors or word embeddings, share the internal functions with the main target tasks. However, the pre-training and the main task training do not happen simultaneously. We will discuss more insights in Section 4 and Appendix C.
2.3 THE CONFLICT
We derive propositions for proving the theorem and explaining the reasons for the phenomena. In practice, the assumptions may not hold strictly, and the effects come more from the soft biases of the conclusions. The proofs are in Appendix B.
Assumption 2 leads a model to predict training outputs for any input.
Proposition 1 (Seen prediction). From Assumption 2, ∀x ∈ X : f(x) ∈ f(Xtrain).
Informally, suppose there is a non-empty set of all inputs that a function f does not map to training outputs f(Xtrain). In that case, we can design another function f ′ that predicts a training output for these inputs and keeps predictions for other inputs. Then both f and f ′ equivalently distinguish training outputs. With Assumption 2, f ′ is preferred, hence ∀x ∈ X : f(x) ∈ f(Xtrain). It does not apply Assumption 1, indicating that the phenomena may happen before a model is well trained.
If a model performs well for the training set, it predicts training ground-truth output for any input.
Proposition 2 (Seen label). From Assumption 1 and Proposition 1, ∀x ∈ X : f(x) ∈ Ytrain.
It says that the prediction is not any new output. It is a stronger argument than avoiding a particular new output for each input in systematic generalization. It explains that prediction is incorrect because any new output is resisted. We evaluate it in Section 3.
We then look at the conflict. For any test sample (x, y), the definition of systematic generalization requires that the output y is not a training output Ytrain. However, this contradicts Proposition 2.
Theorem 1 (The conflict). From Definition 1 and Proposition 2, ∀(x, y) ∈ Dtest : y ̸= f(x).
We covered systematic generalization definition and function sharing assumption. We then derived propositions and a theorem of the conflict.
3 EXPERIMENTS
We run experiments to show that function sharing reduces the ability of systematic generalization in deep learning. We not only compare sharing or not but also adjust the degree of sharing. We focus on the cases where new outputs are unseen combinations of seen factor values. We cover different standard deep neural network models. The details of networks and experiments can be found in Appendix D. More experiments with natural inputs are in Appendix A. The results of zero-shot learning datasets are in Appendix E, where factors are mainly related to the input locality. We look at the experiment settings and results.
3.1 SETTINGS
Data preparation We construct a dataset from two ten-class classification datasets. The training data are generated from the original training dataset. We first chose output y and chose x based on it. y1 is chosen from all possible labels. y2 is chosen from five classes {y1, y1 + 1, . . . , y1 + 4} (we use modular for labels). The test data are generated from the original test dataset. y1 is chosen in the same way as in training. y2 is chosen from the other classes {y1+5, y1+6, . . . , y1+9}. In this design, training and test label combinations are mutually exclusive, but test labels for each output factor are seen in training. Any factor label appears evenly in training and test combinations. x1 and x2 are chosen conditioned on their labels y1, y2, respectively, and merged as the input x. The original datasets and input merge methods vary for each experiment. All the choices follow uniform distributions.
Architecture To evaluate the influence of function sharing on new outputs, we change the function sharing ability while keeping other properties stable for a deep learning model. So we modify function sharing related to new outputs. Since a new output appears only as a new factor combination in our setting, we adjust the sharings between output factors. It does not need to remove all sharings and avoid difficulties in model design. We choose a layer and duplicate the following layers, keeping the number of all hidden nodes in each layer if feasible (Figure 4). We call the former part a shared network and the latter parts individual networks. Each individual network predicts one output factor, so the output is disentangled. We will discuss entangled outputs in Section 4. We keep the depth of the whole network and change the depth of the shared network and individual networks. Note that only sharing the input layer indicates learning two separate models.
Evaluation metrics We use accuracy as the metric. A sample prediction is correct if all the outputs are correct. We have three types of accuracy. The first is the regular evaluation of test data
for systematic generalization (a: Test Sample Accuracy) corresponding to Theorem 1. We also consider a set of inputs mapped to unseen output combinations corresponding to Proposition 2. We evaluate whether the test samples predict one of the unseen output factor combinations (b: Test Set Accuracy). However, test samples are only a subset of input space. If a model learns a different set of factors, the expected inputs may not be those of test samples. So we also evaluate systematic generalization as a model property for any valid input. We randomly draw test inputs from the whole input space (c: Random Set Accuracy)1. We run each experiment five times and plot the mean and the standard deviation (Figure 5). The result numbers are in Table 2 (Appendix D.3).
a : E(x,y)∼P (Dtest)[δ(f(x) = y)] b : Ex∼P (Xtest)[δ(f(x) ∈ Ytest)] c : Ex∼U [X ][δ(f(x) ∈ Ytest)]
3.2 RESULTS
Fully Connected Network We use an eight-layer fully connected neural network with a flattened image input. We use the Fashion dataset (Xiao et al., 2017) and the MNIST dataset (LeCun et al., 1998). The datasets are uncomplicated to avoid the training data under-fitting for a fully connected neural network. We merge the two inputs by averaging values at each input node.
Convolutional Network We use a convolutional neural network with six convolutional layers and two fully connected layers. We use the CIFAR-10 dataset (Krizhevsky, 2009) and the Fashion dataset (Xiao et al., 2017). We scale the input sizes to that of the larger one and broadcast gray images to colored ones. We merge the inputs by averaging at each node. We use the Fashion dataset as one factor because the average of two colored images can cause training data under-fitting for convolutional neural networks.
Residual Network We use ResNet50 (He et al., 2016), which has five stages, each treated as a layer when changing the shared network depth. It uses the same dataset setting as the CNN experiment.
Vision Transformer We use the Vision Transformer (Dosovitskiy et al., 2021) with one fully connected layer for each patch, five attention layers, and two fully connected layers. We treat the patches as one layer. It uses the same dataset setting as the CNN experiment.
LSTM A recurrent network shares the same parameters across its unrolled steps, so a single LSTM cannot be split by depth into different individual networks. Instead, we treat each whole LSTM as a layer. We use stacked LSTM models with an embedding layer, five bidirectional LSTM layers, and two fully connected layers. We use the Reuters dataset (Joachims, 1998) for both the first and the second datasets. We filter samples by a maximum input length of 200 and use the ten most frequent classes. We merge inputs by concatenating the two input texts, so the inputs have different lengths because the text lengths vary.
We also run experiments with a one-layer LSTM, which compares sharing all layers against sharing none. The results indicate that the shared network generalizes worse than the individual networks (Table 2).
1δ(·) is 1 if the statement is true and 0 otherwise. U[X] is the uniform distribution over valid inputs.
Transformer We use Transformer (Vaswani et al., 2017). Since it is a classification problem, we only use the encoder. It has one embedding layer, five hidden layers, and two fully connected layers. We use the same dataset setting as the LSTM experiment.
Summary of results Figure 5 shows that, for each evaluation, the accuracy at the left end (not sharing any hidden layer) is higher than at the right end (sharing all hidden layers), and it generally decreases as the shared network depth increases. The results indicate that function sharing weakens systematic generalization.
4 DISCUSSIONS
4.1 ENTANGLED OUTPUT
Though the theory does not require disentangled output, the experiments use it to split the network into individual ones. We argue that predicting an entangled output is no easier than predicting a disentangled one, so the experimental conclusions are likely to extend to entangled outputs. (1) Disentangled output is a particular and usually less complicated case of entangled output. (2) Entangled output can be seen as another shared layer, and the experiments show that increasing the number of shared layers reduces systematic generalization ability, so entangled outputs are also likely to suffer from the problem. (3) We can treat each output node of an entangled output as a factor, so it becomes a disentangled output; the generalization is even more difficult if a value is unseen for an output node.
4.2 BEYOND FACTOR RECOMBINATION
The definition of systematic generalization (Definition 1) requires that each factor of a test label is seen in a training label. However, this requirement is not directly used in the derivations, so the conclusions may apply to more general o.o.d. problems beyond recombining factors. A new output may correspond to an unseen activation for an output node, e.g., a new class in a classification problem. In such settings, it is sometimes argued that the output bias parameter discourages predicting the new value because it receives no training signal that would increase it. This work provides another reason why a new value is not predicted.
4.3 WHY DOES FUNCTION SHARING HAPPEN IN DEEP LEARNING?
We discuss how function sharing can arise from characteristics of deep learning: deep architecture, shareable networks, and greedy optimization. Deep architecture and shareable networks make it possible for factors to share elaborated functions, and greedy optimization encourages the sharing. Deep architecture and greedy search are intrinsic to deep learning: deep architectures are needed to fit complicated non-linear functions, and searching deep learning's large and complex parameter space requires some prioritization, which leads to greedy search. Shareable networks are widely used in standard deep learning models and work well for i.i.d. problems, but they are less essential than the other two characteristics.
4.4 POTENTIAL SOLUTIONS
We consider possible solutions to avoid function sharing and achieve systematic generalization. Following the above discussion, we look at shareable networks. Since we consider the recombination of factors, we focus on sharing between factors. One potential solution then uses individual networks for the output factors, similar to the experiment setup. We discuss how to design such networks when the input or the output is entangled. If the output is entangled, we can design an architecture where each individual network only changes one factor in the output; for example, one individual network changes color, and another changes shape. If the input is entangled, we need to extract factors from it to feed the individual networks. This raises two questions: how to avoid spurious influence from the other factors, and how to keep the extraction working under the test distribution. We can bottleneck the representations for the first, and divide the input into units that are invariant between training and test (e.g., words or objects) for the second.
5 RELATED WORK
Systematic generalization and deep learning Systematic generalization2 (Fodor & Pylyshyn, 1988; Lake & Baroni, 2018; Bahdanau et al., 2019) is considered the “Great Move” of evolution, caused by the need to process an increasing amount and diversity of environmental information (Newell, 1990). Cognitive scientists see it as central for an organism to view the world (Gallistel & King, 2011). Studies indicate it is related to the prefrontal cortex (Robin & Holyoak, 1995). Commonsense has been discussed as critical for systematic generalization (McCarthy, 1959; Lenat et al., 1986), and recent works aim to find general prior knowledge (Goyal & Bengio, 2020), e.g.,
2It is also called compositional generalization in other literature.
Consciousness Prior (Bengio, 2017). Levels of systematicity were defined (Hadley, 1992; Niklasson & van Gelder, 1994), and types of tests were summarized (Hupkes et al., 2020). We focus on the primary case with an unseen combination of seen factor values.
A closely related field is causal learning, rooted in the eighteenth century (Hume, 2003) and in classical fields of AI (Pearl, 2003). It has mainly been explored from statistical perspectives (Pearl, 2009; Peters et al., 2016; Greenland et al., 1999; Pearl, 2018) with do-calculus (Pearl, 1995; 2009) and interventions (Peters et al., 2016). Causation is formalized as Independent Causal Mechanisms (ICMs) (Peters et al., 2017; Schölkopf et al., 2021). Systematic generalization is the counterfactual case where the joint input distribution is intervened to take new values that have zero probability in training (covariate shift). This work indicates that standard neural networks do not prefer to learn ICMs.
Parallel Distributed Processing (PDP) models (Rumelhart et al., 1986) use Connectionist models with distributed representations, which describe an object in terms of a set of factors. Though they have the potential to combine the factors to create unseen object representations (Hinton, 1990), it was criticized that they do not address systematic generalization in general (Fodor & Pylyshyn, 1988; Marcus, 1998). Deep learning is a recent PDP model with many achievements (LeCun et al., 2015; He et al., 2016). It has been shown that deep neural networks use the composition of functions to achieve high performance (Montufar et al., 2014). The improvements on i.i.d. problems encourage efforts to equip deep learning with systematic generalization.
Recent directions In addition to architecture design (Russin et al., 2019; Andreas et al., 2016) and data augmentation (Andreas, 2020; Akyürek et al., 2021; Jia & Liang, 2016), the main perspectives for systematic generalization approaches include disentangled representation learning, attention mechanism, and meta-learning.
Disentangled representations (Bengio et al., 2013) are learned in an unsupervised manner. Early methods learn the representation from statistical independence (Higgins et al., 2017; Locatello et al., 2019). Later, a definition of disentangled representation was proposed with symmetry transformations (Higgins et al., 2018), leading to Symmetry-based Disentangled Representation Learning (Caselles-Dupré et al., 2019; Painter et al., 2020; Pfau et al., 2020). A disentangled representation learning model can be used as a feature extractor for other systematic generalization tasks.
Attention mechanisms are widely used in neural networks (Bahdanau et al., 2015). Transformers (Vaswani et al., 2017) are modern neural network architectures with self-attention. Recurrent Independent Mechanisms (Goyal et al., 2021b) use attention and the name of the incoming nodes for variable binding. Global workspace (Goyal et al., 2021a) improves them by using limited-capacity global communication to enable the exchangeability of knowledge. Discrete-valued communication bottleneck (Liu et al., 2021) further enhances systematic generalization ability.
Meta-learning (Lake, 2019) usually designs a series of training tasks for learning a meta-learner and uses it in a target task. Each task has training and test data, where test data requires systematic generalization from training data. When ICMs are available, they can be used to generate meta-learning tasks (Schölkopf et al., 2021). Meta-reinforcement learning was used for causal reasoning (Dasgupta et al., 2019). Meta-learning can also capture the adaptation speed to discover causal relations (Bengio et al., 2020; Ke et al., 2019).
Deep learning is a fast-growing field, and many efforts focus on designing architectures and algorithms to improve its performance. However, it is less discussed why standard deep learning models do not achieve systematic generalization. This paper looks into a built-in conflict.
6 CONCLUSION
This paper investigates a built-in conflict between deep learning and systematic generalization. It explains one of the reasons why standard neural networks seldom achieve systematic generalization. We hypothesize that the conflict is caused by sharing internal functions, and experiments support it. A model partitions an input space into multiple parts separated by boundaries. The function sharing tends to reuse the boundaries, leading to fewer parts for new outputs, which conflicts with systematic generalization. The phenomena are shown in different standard deep neural networks. We hope this finding provides a new understanding of systematic generalization mechanisms in deep learning and helps to improve machine learning algorithms for a higher level of artificial intelligence.
A MORE EXPERIMENTS
We run experiments with natural inputs. For image data, we use the NICO++ dataset (Zhang et al., 2022), with five foregrounds as the first output label and five backgrounds as the second. For text data, we use Amazon reviews (Ni et al., 2019), with five categories as the first output label and five ratings as the second. Each dataset has two outputs, each containing five possible classes. There are 25 class combinations, and we separate them into 15 training combinations and 10 test ones in a similar way as in the experiment section. For fully connected networks, we use the Fashion dataset (ten classes) rendered with ten colors. Please refer to Figure 6 and Table 1 for examples.
For the image dataset, we aggregated foregrounds into five abstract classes, e.g., mammal and vehicle. This makes better use of the limited data annotated with combined labels. We use 72,176 image samples. For text data, we randomly select 100,000 samples for each category, with a length limit of 100 tokens. Data with training combinations are randomly split into training and i.i.d. generalization data with a ratio of 9:1.
The results are in Figure 7. Similar to the results in the experiment section, the test accuracies decrease as there are more shared layers. Also, both the training accuracy and the i.i.d. generalization accuracy do not decrease as much as the test accuracies.
We run ablations. Figure 8 shows the results with different layer widths. Figure 9 shows the results with dropout (Srivastava et al., 2014) and mixup (Zhang et al., 2018). All the results show effects similar to the original experiment.
B PROOFS
Proposition 1 (Seen prediction). From Assumption 2, ∀x ∈ X : f(x) ∈ f(Xtrain).
Proof. We are going to prove that for a function g with at least one input mapped to a new output, there exists a more preferred function f with all inputs mapped to seen outputs. Therefore, if a function does not have other functions more preferred over it, the function follows the proposition.
Given a function g, we construct a function f. Pick x′ ∈ Xtrain and define, for all x ∈ X:

f(x) = g(x) if g(x) ∈ g(Xtrain), and f(x) = g(x′) otherwise.

Then, for all xa, xb ∈ X, if g(xa) ∈ g(Xtrain):

g(xa) = g(xb) =⇒ f(xa) = g(xa) = g(xb) = f(xb);

otherwise:

g(xa) = g(xb) =⇒ g(xb) ∉ g(Xtrain) =⇒ f(xa) = g(x′) = f(xb).

In both cases, g(xa) = g(xb) =⇒ f(xa) = f(xb).

On the other hand, by the assumption on g, there exists x ∈ X with g(x) ∉ g(Xtrain), so g(x) ≠ g(x′) ∈ g(Xtrain) while f(x) = g(x′) = f(x′). Hence f(x) = f(x′) does not imply g(x) = g(x′). With Assumption 2, f is preferred over g.

Also, for all x ∈ X, if g(x) ∈ g(Xtrain), there exists x′′ ∈ Xtrain with g(x) = g(x′′), hence f(x) = f(x′′) ∈ f(Xtrain); otherwise f(x) = g(x′) = f(x′) ∈ f(Xtrain).

Therefore, ∀x ∈ X : f(x) ∈ f(Xtrain).
Proposition 2 (Seen label). From Assumption 1 and Proposition 1, ∀x ∈ X : f(x) ∈ Ytrain.
Proof. From Assumption 1, f(Xtrain) = Ytrain. From Proposition 1, ∀x ∈ X : f(x) ∈ f(Xtrain) = Ytrain.
Theorem 1 (The conflict). From Definition 1 and Proposition 2, ∀(x, y) ∈ Dtest : y ̸= f(x).
Proof. ∀(x, y) ∈ Dtest: y ̸∈ Ytrain (Definition 1), x ∈ Xtest ⊆ X =⇒ f(x) ∈ Ytrain (Proposition 2). Therefore y ̸= f(x).
C A CONJECTURE ON WHY THE FUNCTION SHARING HAPPENS
The neural network training process is complicated, so it is hard to describe what happens. Instead, we have a conjecture.
For a large network, when a boundary is learned for the first time, it separates the problem into two sub-problems, and their learning processes do not influence each other for the rest of the training.
We first explain the idea with an analogy of a binary decision tree (or a hierarchical classification). We then define a boundary and discuss its properties.
Binary Decision tree We consider a binary decision tree in which each decision node separates the label set into two parts, one for each sub-tree. For example, if there are 10 classes, the root node may divide them into 6 and 4 classes, and the root node of the first sub-tree then divides the 6 classes into two sets of 3 classes. In this decision tree, a node separates the input space into two parts with disjoint output labels, and each part is learned separately.
Such a decision tree does not predict new outputs because all leaf nodes predict seen outputs. We argue that the neural network training process resembles creating such a decision tree in some respects, where a decision-tree node corresponds to a boundary in a neural network.
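A toy sketch of this hierarchical splitting (illustrative code, not part of the experiments) makes the point explicit: every leaf stores a seen label, so no input can be routed to a new output.

def split_labels(labels):
    # Recursively split a label set in two; each leaf predicts one seen label.
    if len(labels) == 1:
        return labels[0]
    mid = len(labels) // 2   # an even split here; a 6-and-4 split works equally well
    return (split_labels(labels[:mid]), split_labels(labels[mid:]))

tree = split_labels(list(range(10)))  # nested tuples whose leaves are the 10 classes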
Boundary We consider a problem P = (𝒳, X, Y) with input space 𝒳, input set X, and output set Y. We have a ground-truth mapping f : 𝒳 → Y and a learned mapping f̂ : 𝒳 → Y. We define a boundary as follows.

Definition 2 (Boundary). Suppose we have a binary partition (𝒳a, 𝒳b) of an input space 𝒳, where 𝒳a, 𝒳b are non-empty:

𝒳a ∪̇ 𝒳b = 𝒳, Xa = X ∩ 𝒳a, Xb = X ∩ 𝒳b, Ya = f(Xa), Yb = f(Xb).

It is a boundary if the output sets are disjoint and, for each part, all inputs map to its output set:

Ya ∪̇ Yb = Y, f̂(𝒳a) = Ya, f̂(𝒳b) = Yb.
A boundary separates a problem P = (𝒳, X, Y) into two sub-problems Pa = (𝒳a, Xa, Ya) and Pb = (𝒳b, Xb, Yb). We assume that once the boundary is learned for the first time, the learning processes of Pa and Pb do not influence each other for the rest of the training.

Assumption 3 (Separate sub-problems). When a network is large enough, learning one sub-problem does not affect the prediction of another sub-problem.
With this assumption, for a large network, a boundary separates the original problem into two problems whose training processes do not influence each other, and the assumption applies recursively to each sub-problem. When a problem has only one label, all its inputs are mapped to that label, so the model learns not to predict an unseen output. This learning process is similar to that of a decision tree.
D EXPERIMENT DETAILS
D.1 VISUALIZATION SETTINGS
The model is a fully connected neural network with two input and two output nodes. It has six hidden layers with ReLU activations, and each hidden layer has eight nodes. We use a mini-batch
size of 10 with a learning rate of 0.01. We iterate until the model prediction becomes stable. Please see the original work of deep playground for more information. We use six Intel(R) Core(TM) i5-8400 2.80GHz CPUs, and the asset has a public license.
D.2 EXPERIMENT SETTINGS
We use GeForce GTX 1080 or GeForce GTX 1050 Ti GPU for single GPU experiments. We use TensorFlow for implementation. The assets have a public license.
Each input element is linearly scaled to [-0.5, 0.5] for image input, and for random image input we sample uniformly from this interval. For random text input, we select two sentence lengths uniformly from the valid integers (one to the maximum length) and then draw each word uniformly from the vocabulary.
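A sketch of these random-input generators is given below (assumed names; one plausible reading where the two sampled text lengths are constrained to fit within the maximum input length):

import numpy as np

rng = np.random.default_rng(0)

def random_images(n, shape=(32, 32, 3)):
    # uniform in the same interval as the scaled real inputs
    return rng.uniform(-0.5, 0.5, size=(n,) + shape)

def random_texts(n, vocab_size=30977, max_len=200, pad_id=0):
    out = np.full((n, max_len), pad_id, dtype=np.int64)
    for k in range(n):
        l1 = int(rng.integers(1, max_len // 2 + 1))   # first sentence length
        l2 = int(rng.integers(1, max_len - l1 + 1))   # second sentence length
        out[k, :l1 + l2] = rng.integers(1, vocab_size, size=l1 + l2)
    return out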
Fully Connected Network The input shape is 28 × 28, flattened to a vector. There are seven fully connected layers. Each of them has 512 hidden nodes and ReLU activation. The output has ten nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 2,000 iterations. Each evaluation uses 10,000 samples.
Convolutional Network The input shape is 32 × 32 × 3. There are seven convolutional layers. Each of them has 3 × 3 kernel size with 64 channels. Then the layer is flattened. We have a fully connected layer with 128 nodes and ReLU activation. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 5,000 iterations. Each evaluation uses 10,000 samples.
Residual Network The input is the same as CNN. The model is the standard ResNet50 implementation. The hidden groups are treated as one layer, so there are five hidden layers. The hidden layer size is 64. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 10,000 iterations. Each evaluation uses 10,000 samples.
Vision Transformer The input is the same as CNN. The model is the standard Vision Transformer implementation with seven hidden layers. The hidden layer size is 64. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 10,000 iterations. Each evaluation uses 10,000 samples.
LSTM The vocabulary size is 30,977, including a start symbol and padding symbol. The input length is 200. The embedding size is 64. There are seven stacked bidirectional LSTM layers, and each has 32 hidden nodes for each direction. Then the output is flattened. The output layer is a fully-connected layer with ten output nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 64, and we train 1,000 iterations. Each evaluation uses 10,000 samples.
Transformer The input is the same as that of LSTM. The embedding size is 64. There are seven hidden groups. The hidden layer size is 64. The output is flattened. The output layer is a fully-connected layer with ten output nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 64, and we train 2,000 iterations. Each evaluation uses 10,000 samples.
D.3 EXPERIMENT RESULTS
We numerically compare the individual (left ends in the result figures, 0-max) and shared (right ends, max-0) networks in Table 2. LSTM is the stacked LSTM, and LSTM-1 has only one LSTM layer. The table shows that the shared network has lower scores than the individual networks on all three types of accuracy, indicating that function sharing hinders systematic generalization.
E ZERO-SHOT LEARNING DATASETS
We look at the results for zero-shot learning. We use the aPY (Farhadi et al., 2009), AwA2 (Xian et al., 2019), CUB (Wah et al., 2011), and SUN (Patterson & Hays, 2012) datasets. In these datasets, factors are mainly related to the input locality (Sylvain et al., 2020). We use the pre-extracted input features for aPY and AwA2 and image input for CUB and SUN. We construct output labels from attributes. Each attribute is a binary number, 1 if it exists in a sample and 0 otherwise. We select the six attributes whose average values are most balanced (closest to 0.5) over all data. The first, third, and fifth attributes are used for the first output, and the other three for the second output. Each output thus has eight classes, one for each combination of three binary attributes. For aPY, samples may share the same image, so we construct training data from the original training data and test data from the original test data. For the other datasets, we split all data into disjoint training and test sets.
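A sketch of this label construction (illustrative names) is:

import numpy as np

def attributes_to_labels(attrs):
    # attrs: (n, 6) binary matrix of the six selected attributes
    weights = np.array([4, 2, 1])
    y1 = attrs[:, [0, 2, 4]] @ weights   # first, third, and fifth attributes
    y2 = attrs[:, [1, 3, 5]] @ weights   # the remaining three
    return y1, y2                        # each label in {0, ..., 7}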
We use an eight-layer CNN model, the same as in the experiment section. The batch size is 512 for aPY and AwA2 and 256 for CUB and SUN. Other settings are the same as in the experiment section. The results are shown in Figure 10 and Table 4. Similar to the previous experiments, increasing the shared network depth reduces systematic generalization capability.
F MORE DISCUSSIONS
F.1 TRAINING PROCESS
We discussed that sharing boundaries reduces the number of partitions and shrinks the area for new outputs (Proposition 1). We run experiments to find when this happens during training. We sample 10,000 inputs from the test data, and if an unseen output combination receives at least 50 samples, we regard it as a new-output (o.o.d.) partition. We plot the number of o.o.d. partitions and the ratio of test samples falling in the o.o.d. partitions for shared and individual networks in Figure 11. The experiment settings follow the DNN and CNN settings in the experiment section. The results show that the differences emerge early in training.
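A sketch of these partition statistics (illustrative names; the exact in-partition ratio computation may differ in detail):

from collections import Counter

def ood_partition_stats(pred_pairs, test_combos, min_count=50):
    # pred_pairs: predicted (y1, y2) tuples for the 10,000 sampled inputs
    counts = Counter(pred_pairs)
    combos = set(test_combos)
    ood = {c: m for c, m in counts.items() if c in combos and m >= min_count}
    n_partitions = len(ood)                          # number of o.o.d. partitions
    ratio = sum(ood.values()) / len(pred_pairs)      # test-sample ratio in them
    return n_partitions, ratio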
F.2 EQUALLY DIFFICULT FACTORS
We look at the results when the two inputs are equally hard to learn. We use CIFAR-10 for both the first and the second datasets. Since averaging two colored images can cause training data under-fitting, we merge the inputs by concatenating them along the channel dimension, so the input has six channels. The results are shown in Figure 12 and Table 4. Similar to the previous experiments, when the difficulties are equal, the depth of the shared network still weakens systematic generalization.
F.3 LABEL COMBINATIONS
We also test other types of training label distributions. We design tile and one-shot combinations. In tile, a label combination is used for training when Y1 < 5 or Y2 < 5, which is similar to the split in the illustrative example. In one-shot, a label combination is used for training when Y1 < 9 or Y2 < 1. In this case, (9, 0) is the only training combination containing Y1 = 9, which is similar to one-shot learning. The results for the fully connected neural network are in Figure 13 and Table 5. They are similar to the results in the experiment section.
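For reference, the two membership rules as a small Python sketch (the function name is illustrative):

def is_train_combination(y1, y2, scheme):
    if scheme == "tile":       # similar to the split in the illustrative example
        return y1 < 5 or y2 < 5
    if scheme == "one_shot":   # Y1 = 9 appears in training only in (9, 0)
        return y1 < 9 or y2 < 1
    raise ValueError(scheme)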
Summary Of The Paper
This paper investigates systematic generalization of multi-label classifications in settings where the input space is shared between the different labels. The authors suggest that deep learning models are biased towards feature reuse, which conflicts with systematic generalization to new combinations of known classes. They explore this hypothesis with theoretical analyses under toy assumptions, and empirical experiments across a variety of architectures. Generally, they find that more sharing of features reduces generalization accuracy in their settings.
Strengths And Weaknesses
Strengths:
The questions posed are interesting.
I appreciate the breadth of architectures considered, especially sharing different numbers of layers.
Weaknesses:
The way the paper is presented fundamentally ignores the No Free Lunch theorem [e.g. Adam, 2019]. There is no system that can generalize perfectly on every task and training dataset—there cannot be a conflict between an architecture class and systematic generalization writ large. We have to ask the question of how the inductive biases of the model class fit the class of tasks we are interested in solving.
DL researchers are well aware of the feature sharing bias—it forms the basis of auxiliary task training and/or pretraining methods, as the authors note. The reason such methods tend to improve generalization is that sharing features is useful on real-world datasets. For example, the input features learned by solving masked language modeling tasks empirically improve systematic generalization performance substantially even on tasks like SCAN and CFQ [Furrer et al., 2020]. There are even theoretical accounts of why feature sharing can improve generalization in the presence of noise [e.g. Lampinen et al., 2019].
The datasets used in the paper are therefore cleverly created to make feature sharing an actively harmful strategy. But to do so, the authors rely on essentially adversarial dataset design, where they combine input stimuli in very unnatural ways (averaging images, or concatenating completely unrelated pieces of text), and then enforce extremely strong correlations between these inputs at train time, which are completely reversed at test time. There is no reason given to think that this process has anything to do with any real-world data generating process.
Therefore, I would challenge the authors to demonstrate real-world tasks and datasets, not artificially and adversarially created ones, in which their observations apply.
Otherwise, it seems to me that feature sharing is a feature, not a bug of deep learning. Nobody has ever claimed that deep learning is capable of generalizing systematically in every task anyone can come up with—that would violate the NFL theorem. But I’d argue that the DL family is empirically the most successful system for generalizing on real world datasets, in part because of feature sharing.
Architectures and training paradigms:
“We choose a layer and duplicate the following layers, keeping the number of all hidden nodes in each layer if feasible” — it is not clear to me whether this means that each “branch” of the architecture has the same number of nodes as before, or half the number of nodes. If the former, the number of parameters will be larger in networks that split earlier, thus confounding the comparison (since overparameterized models tend to generalize better).
More generally, it would be interesting to see the impact of parameterization on these effects—one might expect somewhat less feature sharing in wider networks, for instance, though it’s unclear how strong the effect would be.
And it would be interesting to see the effect of methods like dropout [Srivastava et al., 2014] or mixup [Zhang et al., 2017] which are known to improve generalization.
References
Adam, Stavros P., et al. "No free lunch theorem: A review." Approximation and optimization (2019): 57-82.
Furrer, D., van Zee, M., Scales, N., & Schärli, N. (2020). Compositional generalization in semantic parsing: Pre-training vs. specialized architectures. arXiv preprint arXiv:2007.08970.
Lampinen, A. K., & Ganguli, S. (2019). An analytic theory of generalization dynamics and transfer learning in deep linear networks. In International Conference on Learning Representations.
Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." The journal of machine learning research 15.1 (2014): 1929-1958.
Zhang, H., Cisse, M., Dauphin, Y. N., & Lopez-Paz, D. (2017). mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412.
Clarity, Quality, Novelty And Reproducibility
The clarity of the paper could be improved. For example:
Showing examples of the task stimuli in the paper—particularly the visual ones—would I think help to emphasize how unnatural the tasks are.
The data preparation could be rewritten to be clearer, by first specifying that the data generating process goes from sampling a pair of labels to sampling the corresponding input.
Split architecture details were unclear to me (noted above).
There is some originality and quality if the above weaknesses are addressed.
We run experiments to show that function sharing reduces the ability of systematic generalization in deep learning. We not only compare sharing or not but also adjust the degree of sharing. We focus on the cases where new outputs are unseen combinations of seen factor values. We cover different standard deep neural network models. The details of networks and experiments can be found in Appendix D. More experiments with natural inputs are in Appendix A. The results of zero-shot learning datasets are in Appendix E, where factors are mainly related to the input locality. We look at the experiment settings and results.
3.1 SETTINGS
Data preparation We construct a dataset from two ten-class classification datasets. The training data are generated from the original training dataset. We first chose output y and chose x based on it. y1 is chosen from all possible labels. y2 is chosen from five classes {y1, y1 + 1, . . . , y1 + 4} (we use modular for labels). The test data are generated from the original test dataset. y1 is chosen in the same way as in training. y2 is chosen from the other classes {y1+5, y1+6, . . . , y1+9}. In this design, training and test label combinations are mutually exclusive, but test labels for each output factor are seen in training. Any factor label appears evenly in training and test combinations. x1 and x2 are chosen conditioned on their labels y1, y2, respectively, and merged as the input x. The original datasets and input merge methods vary for each experiment. All the choices follow uniform distributions.
Architecture To evaluate the influence of function sharing on new outputs, we change the function sharing ability while keeping other properties stable for a deep learning model. So we modify function sharing related to new outputs. Since a new output appears only as a new factor combination in our setting, we adjust the sharings between output factors. It does not need to remove all sharings and avoid difficulties in model design. We choose a layer and duplicate the following layers, keeping the number of all hidden nodes in each layer if feasible (Figure 4). We call the former part a shared network and the latter parts individual networks. Each individual network predicts one output factor, so the output is disentangled. We will discuss entangled outputs in Section 4. We keep the depth of the whole network and change the depth of the shared network and individual networks. Note that only sharing the input layer indicates learning two separate models.
Evaluation metrics We use accuracy as the metric. A sample prediction is correct if all the outputs are correct. We have three types of accuracy. The first is the regular evaluation of test data
for systematic generalization (a: Test Sample Accuracy) corresponding to Theorem 1. We also consider a set of inputs mapped to unseen output combinations corresponding to Proposition 2. We evaluate whether the test samples predict one of the unseen output factor combinations (b: Test Set Accuracy). However, test samples are only a subset of input space. If a model learns a different set of factors, the expected inputs may not be those of test samples. So we also evaluate systematic generalization as a model property for any valid input. We randomly draw test inputs from the whole input space (c: Random Set Accuracy)1. We run each experiment five times and plot the mean and the standard deviation (Figure 5). The result numbers are in Table 2 (Appendix D.3).
a : E(x,y)∼P (Dtest)[δ(f(x) = y)] b : Ex∼P (Xtest)[δ(f(x) ∈ Ytest)] c : Ex∼U [X ][δ(f(x) ∈ Ytest)]
3.2 RESULTS
Fully Connected Network We use an eight-layer fully connected neural network with a flattened image input. We use the Fashion dataset (Xiao et al., 2017) and the MNIST dataset (LeCun et al., 1998). The datasets are uncomplicated to avoid the training data under-fitting for a fully connected neural network. We merge the two inputs by averaging values at each input node.
Convolutional Network We use a convolutional neural network with six convolutional layers and two fully connected layers. We use the CIFAR-10 dataset (Krizhevsky, 2009) and the Fashion dataset (Xiao et al., 2017). We scale the input sizes to that of the larger one and broadcast gray images to colored ones. We merge the inputs by averaging at each node. We use the Fashion dataset as one factor because the average of two colored images can cause training data under-fitting for convolutional neural networks.
Residual Network We use ResNet50 (He et al., 2016), which has five stages, each treated as a layer while changing the shared network depth. It has the dataset setting in the CNN experiment.
Vision Transformer We use Vision Transformer (Dosovitskiy et al., 2021) with one fully connected layer for each patch, five attention layers, and two fully connected layers. We treat the patches as one layer. It has the dataset setting in the CNN experiment.
LSTM A recurrent network has the same parameters for each layer, so it does not support learning different individual networks. Instead, we treat an LSTM as a layer. We use stacked LSTM models with an embedding layer, five bidirectional LSTM layers, and two fully connected layers. We use the Reuters dataset (Joachims, 1998) for both the first and the second datasets. We filter samples by a maximum input length of 200 and use the most frequent ten classes of samples. We merge inputs by concatenating two input texts. The inputs have different lengths because the text lengths vary.
We also run experiments for one-layer LSTM, which compares sharing or not sharing all layers. The results indicate that the shared network has less generalization than the individual network (Table 2).
1δ(·) is 1 if the statement is true and 0 otherwise. U [X ] is the uniform distribution of valid inputs.
Transformer We use Transformer (Vaswani et al., 2017). Since it is a classification problem, we only use the encoder. It has one embedding layer, five hidden layers, and two fully connected layers. We use the same dataset setting as the LSTM experiment.
Summary of results Figure 5 shows that, for each evaluation, the accuracy on the left end (not sharing any hidden layer) is higher than that on the right end (sharing all hidden layers), and it generally decreases as the shared network depth increases. The results indicate that the function sharing weakens systematic generalization.
4 DISCUSSIONS
4.1 ENTANGLED OUTPUT
Though the theory does not require disentangled output, the experiments use it to split the network into individual ones. We discuss that entangled output prediction is not easier than disentangled one. Hence the experiment conclusions will likely extend to entangled outputs. (1) Disentangled output is a particular and usually less complicated case of entangled output. (2) Entangled output can be seen as another shared layer, and the experiments show that increasing shared layers reduces systematic generalization ability, so entangled outputs are also likely to suffer from the problem. (3) We can see each output node of an entangled output as a factor, so it becomes disentangled output. The generalization is even more difficult if a value is unseen for an output node.
4.2 BEYOND FACTOR RECOMBINATION
The definition of systematic generalization (Definition 1) requires that each test label factor is seen in a training label. However, it is not directly used in the derivations, and the conclusions may apply to more general o.o.d. problems beyond recombining factors. A new output may correspond to an unseen activation for an output node, e.g., a new class in a classification problem. In such settings, it is sometimes discussed that the bias parameter of the output is a reason to avoid the new value prediction because it does not have any training signal to increase its value. This work provides another reason for not predicting a new value.
4.3 WHY DOES FUNCTION SHARING HAPPEN IN DEEP LEARNING?
We discuss that function sharing can be caused by characteristics of deep learning: deep architecture, shareable network, and greedy optimization. Deep architecture and shareable networks make it possible for factors to share elaborated functions, and greedy optimization encourages the sharing. Deep architecture and greedy search are necessary for deep learning. Deep learning uses deep architecture to fit complicated non-linear functions. Deep learning has a large and complex parameter space. To search in it, we need some prioritization, which leads to a greedy search. The shareable network is widely used in standard deep learning models and works well for i.i.d. problems. However, it is less critical compared to the other ones.
4.4 POTENTIAL SOLUTIONS
We consider possible solutions to avoid function sharing and achieve systematic generalization. From the above discussion, we look at shareable networks. We consider the recombination of factors and focus on sharing between factors. Then, one potential solution uses individual networks for output factors, similar to the experiment setup. We discuss how to design networks when the input or the output is entangled. If the output is entangled, we can design an architecture where each individual network only changes one factor in the output. For example, one individual network changes color, and another changes shape. If the input is entangled, we need to extract factors from it to feed the individual networks. It contains two questions: how to avoid spurious influence from other factors and keep it working in test distribution. We can bottleneck representations for the first one and divide the input into units invariant in training and test for the other, e.g., words or objects.
5 RELATED WORK
Systematic generalization and deep learning Systematic generalization2 (Fodor & Pylyshyn, 1988; Lake & Baroni, 2018; Bahdanau et al., 2019) is considered the “Great Move” of evolution, caused by the need to process an increasing amount and diversity of environmental information (Newell, 1990). Cognitive scientists see it as central for an organism to view the world (Gallistel & King, 2011). Studies indicate it is related to the prefrontal cortex (Robin & Holyoak, 1995). It was discussed that commonsense is critical (Mccarthy, 1959; Lenat et al., 1986) for systematic generalization, and recent works aim to find general prior knowledge (Goyal & Bengio, 2020), e.g.,
2It is also called compositional generalization in other literature.
Consciousness Prior (Bengio, 2017). Levels of systematicity were defined (Hadley, 1992; Niklasson & van Gelder, 1994), and types of tests were summarized (Hupkes et al., 2020). We focus on the primary case with an unseen combination of seen factor values.
A closely related field is causal learning, rooted in the eighteenth-century (Hume, 2003) and classical fields of AI (Pearl, 2003). It was mainly explored from statistical perspectives (Pearl, 2009; Peters et al., 2016; Greenland et al., 1999; Pearl, 2018) with do-calculus (Pearl, 1995; 2009) and interventions (Peters et al., 2016). The causation forms Independent Causal Mechanisms (ICMs) (Peters et al., 2017; Schölkopf et al., 2021). Systematic generalization is the counterfactual when the joint input distribution is intervened to have new values with zero probability in training (covariate shift). This work indicates that standard neural networks do not prefer to learn ICMs.
Parallel Distributed Processing (PDP) models (Rumelhart et al., 1986) use Connectionist models with distributed representations, which describe an object in terms of a set of factors. Though they have the potential to combine the factors to create unseen object representations (Hinton, 1990), it was criticized that they do not address systematic generalization in general (Fodor & Pylyshyn, 1988; Marcus, 1998). Deep learning is a recent PDP model with many achievements (LeCun et al., 2015; He et al., 2016). It was studied that deep neural networks use the composition of functions to achieve high performance (Montufar et al., 2014). The improvements in i.i.d. problems encourage to equip deep learning with systematic generalization.
Recent directions In addition to architecture design (Russin et al., 2019; Andreas et al., 2016) and data augmentation (Andreas, 2020; Akyürek et al., 2021; Jia & Liang, 2016), the main perspectives for systematic generalization approaches include disentangled representation learning, attention mechanism, and meta-learning.
Disentangled representation (Bengio et al., 2013) is learned in unsupervised manners. Early methods learn the representation from statistical independence (Higgins et al., 2017; Locatello et al., 2019). Later, the definition of disentangled representation was proposed with symmetry transformation (Higgins et al., 2018). It leads to Symmetry-based Disentangled Representation Learning (Caselles-Dupré et al., 2019; Painter et al., 2020; Pfau et al., 2020). A disentangled representation learning model can be used as a feature extractor for other systematic generalization tasks.
Attention mechanisms are widely used in neural networks (Bahdanau et al., 2015). Transformers (Vaswani et al., 2017) are modern neural network architectures with self-attention. Recurrent Independent Mechanisms (Goyal et al., 2021b) use attention and the name of the incoming nodes for variable binding. Global workspace (Goyal et al., 2021a) improves them by using limited-capacity global communication to enable the exchangeability of knowledge. Discrete-valued communication bottleneck (Liu et al., 2021) further enhances systematic generalization ability.
Meta-learning (Lake, 2019) usually designs a series of training tasks for learning a meta-learner and uses it in a target task. Each task has training and test data, where test data requires systematic generalization from training data. When ICMs are available, they can be used to generate meta-learning tasks (Schölkopf et al., 2021). Meta-reinforcement learning was used for causal reasoning (Dasgupta et al., 2019). Meta-learning can also capture the adaptation speed to discover causal relations (Bengio et al., 2020; Ke et al., 2019).
Deep learning is a fast-growing field, and many efforts focus on designing architectures and algorithms to improve its performance. However, it is less discussed why standard deep learning models do not achieve systematic generalization. This paper looks into a built-in conflict.
6 CONCLUSION
This paper investigates a built-in conflict between deep learning and systematic generalization. It explains one of the reasons why standard neural networks seldom achieve systematic generalization. We hypothesize that the conflict is caused by sharing internal functions, and experiments support it. A model partitions an input space into multiple parts separated by boundaries. The function sharing tends to reuse the boundaries, leading to fewer parts for new outputs, which conflicts with systematic generalization. The phenomena are shown in different standard deep neural networks. We hope this finding provides a new understanding of systematic generalization mechanisms in deep learning and helps to improve machine learning algorithms for a higher level of artificial intelligence.
A MORE EXPERIMENTS
We run experiments with natural inputs. For image data, we use NICO++ dataset (Zhang et al., 2022). We use five foregrounds as the first output label and five backgrounds as the second. For text data, we use Amazon reviews (Ni et al., 2019). We use five categories as the first output label and five ratings as the second. Either dataset has two outputs, each containing five possible classes. There are 25 class combinations, and we separate them into 15 training combinations and 10 test ones in a similar way as in the experiment section. For fully connected networks, we use the Fashion dataset (ten classes) and render with ten colors. Please refer to Figure 6 and Table 1 for examples.
For the image dataset, we aggregated foregrounds into five abstract classes, e.g., mammal and vehicle. It better uses the limited data with annotations on combined labels. We use 72,176 image samples. For text data, we randomly select 100,000 samples for each category, with a length limit of 100 tokens. Data with training combinations are randomly split into training and i.i.d. generalization data with a ratio of 9:1.
The results are in Figure 7. Similar to the results in the experiment section, the test accuracies decrease as more layers are shared. Also, the training accuracy and the i.i.d. generalization accuracy decrease much less than the test accuracies.
We also run ablations. Figure 8 shows the results with different layer widths, and Figure 9 shows the results with dropout (Srivastava et al., 2014) and mixup (Zhang et al., 2018). All of them show effects similar to the original experiment.
B PROOFS
Proposition 1 (Seen prediction). From Assumption 2, ∀x ∈ X : f(x) ∈ f(Xtrain).
Proof. We show that for any function g with at least one input mapped to a new output, there exists a more preferred function f with all inputs mapped to seen outputs. Therefore, any function over which no other function is preferred satisfies the proposition.
Given a function g, we construct a function f. Pick some x′ ∈ Xtrain and define, for all x ∈ X:

f(x) = g(x) if g(x) ∈ g(Xtrain), and f(x) = f(x′) (= g(x′)) otherwise.

Then, for all xa, xb ∈ X:

if g(xa) ∈ g(Xtrain): g(xa) = g(xb) =⇒ f(xa) = g(xa) = g(xb) = f(xb);
otherwise: g(xa) = g(xb) =⇒ g(xb) ∉ g(Xtrain) =⇒ f(xa) = f(x′) = f(xb).

In both cases, g(xa) = g(xb) =⇒ f(xa) = f(xb).

On the other hand, ∃x ∈ Xtest : g(x) ∉ g(Xtrain) =⇒ g(x) ≠ g(x′) ∈ g(Xtrain), while f(x) = f(x′). So f(x) = f(x′) does not imply g(x) = g(x′). With Assumption 2, f is preferred over g.

Also, for all x ∈ X:

if g(x) ∈ g(Xtrain): ∃x′′ ∈ Xtrain : g(x) = g(x′′) =⇒ f(x) = f(x′′) ∈ f(Xtrain);
otherwise: f(x) = f(x′) ∈ f(Xtrain).

Therefore, ∀x ∈ X : f(x) ∈ f(Xtrain).
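To make the construction concrete, the following brute-force script verifies the proposition on a toy instance; the input space, training set, and function g below are made up for illustration only:

```python
# A brute-force sanity check of the construction in the proof above.
X = list(range(6))                  # input space
X_train = [0, 1, 2]                 # training inputs
g = {0: 'a', 1: 'b', 2: 'a',        # 'c' is a new output, unseen in training
     3: 'c', 4: 'b', 5: 'c'}

seen = {g[x] for x in X_train}      # g(X_train) = {'a', 'b'}
x_prime = X_train[0]

# Construct f exactly as in the proof.
f = {x: (g[x] if g[x] in seen else g[x_prime]) for x in X}

# Proposition 1: every input maps to a seen output.
assert all(f[x] in {f[t] for t in X_train} for x in X)
# f preserves every equality of g (needed for the preference argument).
assert all((g[a] == g[b]) <= (f[a] == f[b]) for a in X for b in X)
```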
Proposition 2 (Seen label). From Assumption 1 and Proposition 1, ∀x ∈ X : f(x) ∈ Ytrain.
Proof. From Assumption 1, f(Xtrain) = Ytrain. From Proposition 1,
∀x ∈ X : f(x) ∈ f(Xtrain) = Ytrain
Theorem 1 (The conflict). From Definition 1 and Proposition 2, ∀(x, y) ∈ Dtest : y ̸= f(x).
Proof. ∀(x, y) ∈ Dtest: y ∉ Ytrain (Definition 1), and x ∈ Xtest ⊆ X =⇒ f(x) ∈ Ytrain (Proposition 2). Therefore y ≠ f(x).
C A CONJECTURE ON WHY THE FUNCTION SHARING HAPPENS
The neural network training process is complicated, and it is hard to describe exactly what happens. Instead, we offer a conjecture.
For a large network, when a boundary is learned for the first time, it separates the problem into two sub-problems, and their learning processes do not influence each other for the rest of the training.
We first explain the idea with an analogy of a binary decision tree (or a hierarchical classification). We then define a boundary and discuss its properties.
Binary decision tree We consider a binary decision tree in which each decision node separates a label set into two parts, one for each sub-tree. For example, if there are 10 classes, the root node may divide them into 6 and 4 classes. Then the root node of the first sub-tree divides the 6 classes into two sets of 3 classes. In this decision tree, a node separates the input space into two parts with disjoint output labels, and each part is learned separately.
Such a decision tree never predicts new outputs because all leaf nodes predict seen outputs. We argue that the neural network training process resembles the construction of such a decision tree in several respects: a decision tree node corresponds to a boundary in a neural network.
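A minimal sketch of such a label-splitting tree is given below; the halving rule is our own illustrative choice:

```python
# A minimal sketch of the hierarchical label-splitting decision tree.
def build_tree(labels):
    if len(labels) == 1:
        return labels[0]            # leaf: predicts a single seen label
    mid = len(labels) // 2
    return (build_tree(labels[:mid]), build_tree(labels[mid:]))

tree = build_tree(list(range(10)))
# Every leaf is a training label, so no input routed through this tree can
# ever receive an unseen output.
```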
Boundary We consider a problem P = (𝒳, X, Y) with input space 𝒳, input set X, and output set Y. We have a ground-truth mapping f : 𝒳 → Y and a learned mapping f̂ : 𝒳 → Y. We define a boundary as follows.
Definition 2 (Boundary). Suppose we have a binary partition (𝒳a, 𝒳b) of an input space 𝒳, with 𝒳a, 𝒳b non-empty:

𝒳a ∪̇ 𝒳b = 𝒳,  Xa = X ∩ 𝒳a,  Xb = X ∩ 𝒳b,  Ya = f(Xa),  Yb = f(Xb).

It is a boundary if the output sets are disjoint and, for each part, all inputs map to its output set:

Ya ∪̇ Yb = Y,  f̂(𝒳a) = Ya,  f̂(𝒳b) = Yb.
A boundary separates a problem P = (X , X, Y ) to two sub-problems Pa = (Xa, Xa, Ya) and Pb = (Xb, Xb, Yb). We assume that when the boundary is learned for the first time, the learning processes of Pa and Pb do not influence each other for the rest of the training. Assumption 3 (Separate sub-problems). When a network is large enough, learning one sub-problem does not affect the prediction of another sub-problem.
With this assumption, for a large network, a boundary separates the original problem into two problems whose training processes do not influence each other. The assumption applies recursively to each sub-problem. When a problem has only one label, all inputs are mapped to that label, so the model learns not to predict an unseen output. This learning process is similar to that of a decision tree.
D EXPERIMENT DETAILS
D.1 VISUALIZATION SETTINGS
The model is a fully connected neural network with two input and two output nodes. It has six hidden layers with ReLU activations, and each hidden layer has eight nodes. We use a mini-batch
size of 10 with a learning rate of 0.01. We iterate until the model prediction becomes stable. Please see the original work of deep playground for more information. We use six Intel(R) Core(TM) i5-8400 2.80GHz CPUs, and the asset has a public license.
D.2 EXPERIMENT SETTINGS
We use GeForce GTX 1080 or GeForce GTX 1050 Ti GPU for single GPU experiments. We use TensorFlow for implementation. The assets have a public license.
For image input, each element is linearly scaled to [-0.5, 0.5]; random image inputs are sampled uniformly from this interval. For random text input, we select the two sentence lengths uniformly from the valid integers (one to the maximum length) and then draw each word uniformly from the vocabulary.
Fully Connected Network The input shape is 28 × 28, flattened to a vector. There are seven fully connected layers. Each of them has 512 hidden nodes and ReLU activation. The output has ten nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 2,000 iterations. Each evaluation uses 10,000 samples.
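For concreteness, a sketch of this architecture in Keras follows, taking the stated spec literally (a single ten-node softmax output); preprocessing and the encoding of the two label factors into the ten outputs are not specified here and are omitted:

```python
import tensorflow as tf

# A sketch of the stated architecture: 28x28 input, seven 512-unit ReLU
# layers, ten-node softmax output, Adam with learning rate 0.001.
model = tf.keras.Sequential(
    [tf.keras.layers.Flatten(input_shape=(28, 28))]
    + [tf.keras.layers.Dense(512, activation="relu") for _ in range(7)]
    + [tf.keras.layers.Dense(10, activation="softmax")]
)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy")
# Train with batch size 512 for 2,000 iterations, per the settings above.
```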
Convolutional Network The input shape is 32 × 32 × 3. There are seven convolutional layers. Each of them has 3 × 3 kernel size with 64 channels. Then the layer is flattened. We have a fully connected layer with 128 nodes and ReLU activation. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 5,000 iterations. Each evaluation uses 10,000 samples.
Residual Network The input is the same as CNN. The model is the standard ResNet50 implementation. The hidden groups are treated as one layer, so there are five hidden layers. The hidden layer size is 64. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 10,000 iterations. Each evaluation uses 10,000 samples.
Vision Transformer The input is the same as CNN. The model is the standard Vision Transformer implementation with seven hidden layers. The hidden layer size is 64. The output layer has ten nodes with Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 512, and we train 10,000 iterations. Each evaluation uses 10,000 samples.
LSTM The vocabulary size is 30,977, including a start symbol and padding symbol. The input length is 200. The embedding size is 64. There are seven stacked bidirectional LSTM layers, and each has 32 hidden nodes for each direction. Then the output is flattened. The output layer is a fully-connected layer with ten output nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 64, and we train 1,000 iterations. Each evaluation uses 10,000 samples.
Transformer The input is the same as that of LSTM. The embedding size is 64. There are seven hidden groups. The hidden layer size is 64. The output is flattened. The output layer is a fully-connected layer with ten output nodes and Softmax activation. We use cross-entropy loss and Adam optimizer with a learning rate of 0.001. The batch size is 64, and we train 2,000 iterations. Each evaluation uses 10,000 samples.
D.3 EXPERIMENT RESULTS
We numerically compare the individual (left ends in result figures, 0-max) and shared (right ends, max-0) networks in Table 2. LSTM is the stacked LSTM, and LSTM-1 has only one LSTM layer. The shared network has lower scores than the individual network on all three types of accuracy, indicating that function sharing hinders systematic generalization.
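The sketch below shows one plausible way to build the shared and individual variants compared in Table 2; the exact branching mechanism is our reconstruction, not the authors' code:

```python
import tensorflow as tf

# A hypothetical reconstruction of the shared-vs-individual setup:
# `n_shared` bottom layers form a common trunk; the remaining layers are
# duplicated into one private stack per output head.
def make_model(n_shared, n_total=7, width=512, n_classes=10):
    inp = tf.keras.Input(shape=(28 * 28,))
    h = inp
    for _ in range(n_shared):                 # shared trunk
        h = tf.keras.layers.Dense(width, activation="relu")(h)
    heads = []
    for _ in range(2):                        # one branch per output factor
        b = h
        for _ in range(n_total - n_shared):   # individual layers
            b = tf.keras.layers.Dense(width, activation="relu")(b)
        heads.append(tf.keras.layers.Dense(n_classes, activation="softmax")(b))
    return tf.keras.Model(inp, heads)

individual_net = make_model(n_shared=0)   # "0-max": fully separate networks
shared_net = make_model(n_shared=7)       # "max-0": fully shared trunk
```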
E ZERO-SHOT LEARNING DATASETS
We look at the results for Zero-shot learning. We use aPY (Farhadi et al., 2009), AwA2 (Xian et al., 2019), CUB (Wah et al., 2011), and SUN (Patterson & Hays, 2012) datasets. In these datasets, factors are mainly related to the input locality (Sylvain et al., 2020). We use the pre-extracted input features for aPY and AwA and image input for CUB and SUN. We construct output labels from attributes. Each attribute is a binary number, 1 for existing in a sample and 0 otherwise. We select six attributes with the most balanced average number (close to 0.5) in all data. The first, third, and fifth attributes are used for the first output, and the others for the second output. Each output has eight classes yielded from all the combinations of three binary attributes. For aPY, samples may share the same image, so we construct training data from the original training and test data from the original test data. For other datasets, we split all data to disjoint training and test data.
We use an eight-layer CNN model, the same as in the experiment section. The batch size is 512 for aPY and AwA and 256 for CUB and SUN. Other settings are the same as in the experiment section. The results are shown in Figure 10 and Table 4. Similar to the previous experiments, sharing layers reduces systematic generalization capability.
F MORE DISCUSSIONS
F.1 TRAINING PROCESS
We discussed that sharing boundaries reduces the number of partitions and shrinks the area for new outputs (Proposition 1). We run experiments to find when this happens during training. We sample 10,000 inputs from test data, and if an output combination has at least 50 samples, we regard it as a new output (o.o.d.) partition. We plot the number of o.o.d. partitions and the test sample ratio in the o.o.d. partitions for shared and individual networks in Figure 11. The experiment settings follow DNN and CNN settings in the experiment section. The results show that the differences start to happen in early training.
F.2 EQUALLY DIFFICULT FACTORS
We look at the results when the two inputs are equally hard to learn, using CIFAR-10 for both the first and the second dataset. Since averaging two colored images can cause under-fitting on the training data, we merge the inputs by concatenating them along the channel dimension, so the input has six channels. The result is shown in Figure 12 and Table 4. Similar to the previous experiments, when the difficulties are equal, the depth of the shared network weakens systematic generalization.
F.3 LABEL COMBINATIONS
We also test other training label distribution types. We design tile and one-shot combinations. In tile, a label combination is used for training when Y1 < 5 or Y2 < 5; this is similar to the split in the illustrative example. In one-shot, a label combination is used for training when Y1 < 9 or Y2 < 1. In such a case, only (9, 0) contains Y1 = 9, which is similar to one-shot learning. The results for the fully connected neural network are in Figure 13 and Table 5. They are similar to the results in the experiment section.
2. What are the strengths and weaknesses of the paper, particularly regarding its writing, propositions, theorem, experiments, and clarity?
3. How does the reviewer assess the novelty and reproducibility of the paper's content?
4. Are there any specific questions or points that the reviewer raises regarding the paper's hypotheses, assumptions, or conclusions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper hypothesizes that systematic generalization is fundamentally at odds with the tendency of deep learning models to share sub-modules. It provides both a theoretical description and derivation of the issue as well as supporting experiments with several deep learning architectures.
Strengths And Weaknesses
Strengths: the hypothesis advanced by this paper is new and interesting; it deserves to be developed further.
Weaknesses:
The paper is not very well written; see below for some examples where the writing should be improved.
The propositions and theorem are trivial.
The vision experiments seem quite unnatural, since images from different datasets are averaged (see Section 3.2). Also the text experiments are not particularly natural (they concatenate two unrelated sequences and ask to predict both labels).
A general question that might be worth addressing: is the shape of the regions taken into account? For instance, to prove Proposition 1 (using Assumption 2) one can simply merge regions, but it might be unnatural to merge them, given the geometry.
Examples of unclear parts in Section 2.1:
"
Y
contains
K
factors
Y
1
, . . . ,
Y
K
" is not clear. Does it mean that
Y
=
Y
1
×
⋯
×
Y
K
? But this would be at odds with the (unclear) statement "which can be entangled".
Is
X
train
the (ordered) sequence of inputs or is it a set that contains all the input data? (same for
Y
train
and test)
"The values for each factor
i
are included in the training output" is also unclear
"A model
f
maps input
X
to the prediction of output
f
(
X
)
": so is
X
the set of all inputs or a single input?
Examples of unclear parts in Section 2.2:
"deep learning more or equally prefers
f
over
g
" -> "deep learning prefers
f
over
g
more or equally" (a bit more clear in my opinion)
Examples of unclear parts in Section 3.1:
"
Y
1
is chosen from all possible labels": what does this mean? That
Y
1
is the set of all labels?
The two datasets share the same input space
X
?
Clarity, Quality, Novelty And Reproducibility
Clarity: the general idea of the paper is easy to understand but the details are not explained in a very clear way. Novelty: as far as I know, the hypothesis introduced in this paper is new. |
ICLR | Title
Certified Defenses against Adversarial Examples
Abstract
While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most ε = 0.1 can cause more than 35% test error.
1 INTRODUCTION
Despite the impressive (and sometimes even superhuman) accuracies of machine learning on diverse tasks such as object recognition (He et al., 2015), speech recognition (Xiong et al., 2016), and playing Go (Silver et al., 2016), classifiers still fail catastrophically in the presence of small imperceptible but adversarial perturbations (Szegedy et al., 2014; Goodfellow et al., 2015; Kurakin et al., 2016). In addition to being an intriguing phenomenon, the existence of such “adversarial examples” exposes a serious vulnerability in current ML systems (Evtimov et al., 2017; Sharif et al., 2016; Carlini et al., 2016). While formally defining an “imperceptible” perturbation is difficult, a commonly-used proxy is perturbations that are bounded in ℓ∞-norm (Goodfellow et al., 2015; Madry et al., 2017; Tramèr et al., 2017); we focus on this attack model in this paper, as even for this proxy it is not known how to construct high-performing image classifiers that are robust to perturbations.
While a proposed defense (classifier) is often empirically shown to be successful against the set of attacks known at the time, new stronger attacks are subsequently discovered that render the defense useless. For example, defensive distillation (Papernot et al., 2016c) and adversarial training against the Fast Gradient Sign Method (Goodfellow et al., 2015) were two defenses that were later shown to be ineffective against stronger attacks (Carlini & Wagner, 2016; Tramèr et al., 2017). In order to break this arms race between attackers and defenders, we need to come up with defenses that are successful against all attacks within a certain class.
However, even computing the worst-case error for a given network against all adversarial perturbations in an ℓ∞-ball is computationally intractable. One common approximation is to replace the worst-case loss with the loss from a given heuristic attack strategy, such as the Fast Gradient Sign Method (Goodfellow et al., 2015) or more powerful iterative methods (Carlini & Wagner, 2017a; Madry et al., 2017). Adversarial training minimizes the loss with respect to these heuristics. However, this essentially minimizes a lower bound on the worst-case loss, which is problematic since points where the bound is loose have disproportionately lower objective values, which could lure and mislead an optimizer. Indeed, while adversarial training often provides robustness against a specific attack, it often fails to generalize to new attacks, as described above. Another approach is to compute the worst-case perturbation exactly using discrete optimization (Katz et al., 2017a; Carlini
et al., 2017). Currently, these approaches can take up to several hours or longer to compute the loss for a single example even for small networks with a few hundred hidden units. Training a network would require performing this computation in the inner loop, which is infeasible.
In this paper, we introduce an approach that avoids both the inaccuracy of lower bounds and the intractability of exact computation, by computing an upper bound on the worst-case loss for neural networks with one hidden layer, based on a semidefinite relaxation that can be computed efficiently. This upper bound serves as a certificate of robustness against all attacks for a given network and input. Minimizing an upper bound is safer than minimizing a lower bound, because points where the bound is loose have disproportionately higher objective values, which the optimizer will tend to avoid. Furthermore, our certificate of robustness, by virtue of being differentiable, is trainable: it can be optimized at training time jointly with the network, acting as a regularizer that encourages robustness against all ℓ∞ attacks.
In summary, we are the first (along with the concurrent work of Kolter & Wong (2017)) to demonstrate a certifiable, trainable, and scalable method for defending against adversarial examples on two-layer networks. We train a network on MNIST whose test error on clean data is 4.2%, and which comes with a certificate that no attack can misclassify more than 35% of the test examples using ℓ∞ perturbations of size ε = 0.1.
Notation. For a vector z ∈ R^n, we use zi to denote the ith coordinate of z. For a matrix Z ∈ R^{m×n}, Zi denotes the ith row. For any activation function σ : R → R (e.g., sigmoid, ReLU) and a vector z ∈ R^n, σ(z) is a vector in R^n with σ(z)i = σ(zi) (the non-linearity is applied element-wise). We use Bε(z) to denote the ℓ∞ ball of radius ε around z ∈ R^d: Bε(z) = {z̃ | |z̃i − zi| ≤ ε for i = 1, 2, . . . , d}. Finally, we denote the vector of all zeros by 0 and the vector of all ones by 1.
2 SETUP
Score-based classifiers. Our goal is to learn a mapping C : X → Y, where X = R^d is the input space (e.g., images) and Y = {1, . . . , k} is the set of k class labels (e.g., object categories). Assume C is driven by a scoring function f i : X → R for all classes i ∈ Y, where the classifier chooses the class with the highest score: C(x) = argmax_{i∈Y} f i(x). Also, define the pairwise margin f ij(x) def= f i(x) − f j(x) for every pair of classes (i, j). Note that the classifier outputs C(x) = i iff f ij(x) > 0 for all alternative classes j ≠ i. Normally, a classifier is evaluated on the 0-1 loss ℓ(x, y) = I[C(x) ≠ y]. This paper focuses on linear classifiers and neural networks with one hidden layer. For linear classifiers, f i(x) def= Wi⊤x, where Wi is the ith row of the parameter matrix W ∈ R^{k×d}.
For neural networks with one hidden layer consisting of m hidden units, the scoring function is f i(x) = Vi⊤σ(Wx), where W ∈ R^{m×d} and V ∈ R^{k×m} are the parameters of the first and second layer, respectively, and σ is a non-linear activation function applied elementwise (e.g., for ReLUs, σ(z) = max(z, 0)). We will assume below that the gradients of σ are bounded: σ′(z) ∈ [0, 1] for all z ∈ R; this is true for ReLUs, as well as for sigmoids (with the stronger bound σ′(z) ∈ [0, 1/4]).
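As an illustration, the two-layer score function and pairwise margins can be computed as follows (a sketch; the random weights are placeholders, not trained parameters):

```python
import numpy as np

# Sketch of f_i(x) = V_i^T sigma(W x) with ReLU activation, and the pairwise
# margins f^{ij}(x) = f_i(x) - f_j(x).
def scores(x, W, V):
    return V @ np.maximum(W @ x, 0.0)  # sigma applied elementwise

def pairwise_margin(x, W, V, i, j):
    s = scores(x, W, V)
    return s[i] - s[j]

rng = np.random.default_rng(0)
d, m, k = 784, 500, 10  # input dim, hidden units, classes (as in Section 6)
W, V = rng.normal(size=(m, d)), rng.normal(size=(k, m))
x = rng.normal(size=d)
prediction = int(np.argmax(scores(x, W, V)))  # C(x) = argmax_i f_i(x)
```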
Attack model. We are interested in classification in the presence of an attacker A : X → X that takes a (test) input x and returns a perturbation x̃. We consider attackers A that can perturb each feature xi by at most ε ≥ 0; formally, A(x) is required to lie in the ℓ∞ ball Bε(x) def= {x̃ | ‖x̃ − x‖∞ ≤ ε}, which is the standard constraint first proposed in Szegedy et al. (2014). Define the adversarial loss with respect to A as ℓA(x, y) = I[C(A(x)) ≠ y]. We assume the white-box setting, where the attacker A has full knowledge of C. The optimal (untargeted) attack chooses the input that maximizes the pairwise margin of an incorrect class i over the correct class y: Aopt(x) = argmax_{x̃∈Bε(x)} max_i f iy(x̃). For a neural network, computing Aopt is a non-convex optimization problem; heuristics are typically employed, such as the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), which perturbs x based on the gradient, or the Carlini-Wagner attack, which performs iterative optimization (Carlini & Wagner, 2017b).
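For reference, a minimal sketch of the PGD heuristic used later in the experiments; `grad_fn` is a hypothetical stand-in for the gradient of the attacked objective (e.g., cross-entropy) with respect to the input:

```python
import numpy as np

# Iterated signed-gradient ascent projected back onto B_eps(x),
# started from a random point in the ball.
def pgd(x, grad_fn, eps, step=0.01, n_iter=40, seed=0):
    rng = np.random.default_rng(seed)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random restart point
    for _ in range(n_iter):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)      # project onto the ball
    return x_adv
```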
3 CERTIFICATE ON THE ADVERSARIAL LOSS
For ease of exposition, we first consider binary classification with classes Y = {1, 2}; the multiclass extension is discussed at the end of Section 3.3. Without loss of generality, assume the correct label for x is y = 2. Simplifying notation, let f(x) = f1(x) − f2(x) be the margin of the incorrect class over the correct class. Then Aopt(x) = argmax_{x̃∈Bε(x)} f(x̃) is the optimal attack, which is successful if f(Aopt(x)) > 0. Since f(Aopt(x)) is intractable to compute, we will try to upper bound it via a tractable relaxation.
In the rest of this section, we first review a classic result in the simple case of linear networks where a tight upper bound is based on the ℓ1-norm of the weights (Section 3.1). We then extend this to general classifiers, in which f(Aopt(x)) can be upper bounded using the maximum ℓ1-norm of the gradient at any point x̃ ∈ Bε(x) (Section 3.2). For two-layer networks, this quantity is upper bounded by the optimal value fQP(x) of a non-convex quadratic program (QP) (Section 3.3), which in turn is upper bounded by the optimal value fSDP(x) of a semidefinite program (SDP). The SDP is convex and can be computed exactly (which is important for obtaining actual certificates). To summarize, we have the following chain of inequalities:
f(A(x)) ≤ f(Aopt(x)) ≤ f(x) + ε max_{x̃∈Bε(x)} ‖∇f(x̃)‖1 ≤ fQP(x) ≤ fSDP(x),   (1)

where the second inequality is established in Section 3.2 and the last two in Section 3.3. This implies that the adversarial loss ℓA(x) = I[f(A(x)) > 0] with respect to any attacker A is upper bounded by I[fSDP(x) > 0]. Note that for certain non-linearities such as ReLUs, ∇f(x̃) does not exist everywhere, but our analysis below holds as long as f is differentiable almost-everywhere.
3.1 LINEAR CLASSIFIERS
For (binary) linear classifiers, we have f(x) = (W1 − W2)⊤x, where W1, W2 ∈ R^d are the weight vectors for the two classes. For any input x̃ ∈ Bε(x), Hölder’s inequality with ‖x − x̃‖∞ ≤ ε gives:

f(x̃) = f(x) + (W1 − W2)⊤(x̃ − x) ≤ f(x) + ε‖W1 − W2‖1.   (2)

Note that this bound is tight, obtained by taking Aopt(x)i = xi + ε sign(W1i − W2i).
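In code, this closed-form certificate and the attack attaining it look as follows (a sketch; W1 and W2 are the two classes' weight vectors):

```python
import numpy as np

# Closed-form linear certificate of Eq. (2) and the attack attaining it.
def linear_certificate(x, W1, W2, eps):
    w = W1 - W2
    worst_margin = w @ x + eps * np.abs(w).sum()  # f(x) + eps * ||W1 - W2||_1
    x_adv = x + eps * np.sign(w)                  # attains the bound exactly
    return worst_margin, x_adv
```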
3.2 GENERAL CLASSIFIERS
For more general classifiers, we cannot compute f(Aopt(x)) exactly, but motivated by the above, we can use the gradient to obtain a linear approximation g:
f(x̃) ≈ g(x̃) def= f(x) + ∇f(x)⊤(x̃ − x) ≤ f(x) + ε‖∇f(x)‖1.   (3)
Using this linear approximation to generate A(x) corresponds exactly to the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015). However, f is only close to g when x̃ is very close to x, and people have observed the gradient masking phenomenon (Tramèr et al., 2017; Papernot et al., 2016b) in several proposed defenses that train against approximations like g, such as saturating networks (Nayebi & Ganguli, 2017), distillation (Papernot et al., 2016c), and adversarial training (Goodfellow et al., 2015). Specifically, defenses that try to minimize ‖∇f(x)‖1 locally at the training points result in loss surfaces that exhibit sharp curvature near those points, essentially rendering the linear approximation g(x̃) meaningless. Some attacks (Carlini & Wagner, 2016; Tramèr et al., 2017) evade these defenses and witness a large f(Aopt(x)). Figure 1a provides a simple illustration.
We propose an alternative approach: use integration to obtain an exact expression for f(x̃) in terms of the gradients along the line between x and x̃:

f(x̃) = f(x) + ∫₀¹ ∇f(tx̃ + (1 − t)x)⊤(x̃ − x) dt ≤ f(x) + ε max_{x̃∈Bε(x)} ‖∇f(x̃)‖1,   (4)

where the inequality follows from the fact that tx̃ + (1 − t)x ∈ Bε(x) for all t ∈ [0, 1]. The key difference between (4) and (3) is that we consider the gradients over the entire ball Bε(x) rather than only at x (Figure 1b). However, computing the RHS of (4) is intractable in general. For two-layer neural networks, this optimization has additional structure which we will exploit in the next section.
3.3 TWO-LAYER NEURAL NETWORKS
We now unpack the upper bound (4) for two-layer neural networks. Recall from Section 2 that f(x) = f1(x) − f2(x) = v⊤σ(Wx), where v def= V1 − V2 ∈ R^m is the difference in second-layer weights for the two classes. Let us try to bound the norm of the gradient ‖∇f(x̃)‖1 for x̃ ∈ Bε(x). If we apply the chain rule, we see that the only dependence on x̃ is σ′(Wx̃), the activation derivatives. We now leverage our assumption that σ′(z) ∈ [0, 1]^m for all vectors z ∈ R^m, so that we can optimize over possible activation derivatives s ∈ [0, 1]^m directly, independent of x (note that there is potential looseness because not all such s need be obtainable via some x̃ ∈ Bε(x)). Therefore:
‖∇f(x̃)‖1 (i)= ‖W⊤diag(v)σ′(Wx̃)‖1 (ii)≤ max_{s∈[0,1]^m} ‖W⊤diag(v)s‖1 (iii)= max_{s∈[0,1]^m, t∈[−1,1]^d} t⊤W⊤diag(v)s,   (5)
where (i) follows from the chain rule, (ii) uses the fact that σ has bounded derivatives σ′(z) ∈ [0, 1], and (iii) follows from the identity ‖z‖1 = max_{t∈[−1,1]^d} t⊤z. (Note that for sigmoid networks, where σ′(z) ∈ [0, 1/4], we can strengthen the above bound by a corresponding factor of 1/4.) Substituting the bound (5) into (4), we obtain an upper bound on the adversarial loss that we call fQP:
f(Aopt(x)) ≤ f(x) + ε max_{x̃∈Bε(x)} ‖∇f(x̃)‖1 ≤ f(x) + ε max_{s∈[0,1]^m, t∈[−1,1]^d} t⊤W⊤diag(v)s def= fQP(x).   (6)
Unfortunately, (6) still involves a non-convex optimization problem (since W⊤diag(v) is not necessarily negative semidefinite). In fact, it is similar to the NP-hard MAXCUT problem, which requires maximizing x⊤Lx over x ∈ [−1, 1]^d for a graph with Laplacian matrix L. While MAXCUT is NP-hard, it can be efficiently approximated, as shown by the celebrated semidefinite programming relaxation for MAXCUT in Goemans & Williamson (1995). We follow a similar approach here to obtain an upper bound on fQP(x).
First, to make our variables lie in [−1, 1]m instead of [0, 1]m, we reparametrize s to produce:
max_{s∈[−1,1]^m, t∈[−1,1]^d} (1/2) t⊤W⊤diag(v)(1 + s).   (7)
Next, we pack the variables into a vector y ∈ R^{m+d+1} and the parameters into a matrix M:

y def= [1; t; s],   M(v, W) def=
[ 0              0             1⊤W⊤diag(v) ]
[ 0              0             W⊤diag(v)   ]
[ diag(v)⊤W1     diag(v)⊤W     0           ]   (8)

In terms of these new objects, our objective takes the form:
max_{y∈[−1,1]^{m+d+1}} (1/4) y⊤M(v,W)y = max_{y∈[−1,1]^{m+d+1}} (1/4) ⟨M(v,W), yy⊤⟩.   (9)
Note that every valid vector y ∈ [−1, +1]^{m+d+1} satisfies the constraints yy⊤ ⪰ 0 and (yy⊤)jj = 1. Defining P = yy⊤, we obtain the following convex semidefinite relaxation of our problem:
fQP(x) ≤ fSDP(x) def= f(x) + (ε/4) max_{P⪰0, diag(P)≤1} ⟨M(v,W), P⟩.   (10)
Note that the optimization of the semidefinite program depends only on the weights v and W and does not depend on the inputs x, so it only needs to be computed once for a model (v,W ).
Semidefinite programs can be solved with off-the-shelf optimizers, although these optimizers are somewhat slow on large instances. In Section 4 we propose a fast stochastic method for training, which only requires computing the top eigenvalue of a matrix.
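As a sketch, the SDP in (10) can be posed with an off-the-shelf modeling tool such as cvxpy (the paper itself uses YALMIP with Sedumi); this assumes an SDP-capable backend such as SCS is installed:

```python
import cvxpy as cp

# Solve max <M, P> s.t. P >= 0 (PSD), diag(P) <= 1, for symmetric M.
def sdp_value(M):
    D = M.shape[0]
    P = cp.Variable((D, D), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.trace(M @ P)), [cp.diag(P) <= 1])
    prob.solve()
    return prob.value

# The certificate is then f_SDP(x) = f(x) + (eps / 4) * sdp_value(M).
```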
Generalization to multiple classes. The preceding arguments all generalize to the pairwise margins f ij , to give:
f ij(A(x)) ≤ f ijSDP(x) def= f ij(x) + (ε/4) max_{P⪰0, diag(P)≤1} ⟨M ij(V,W), P⟩, where   (11)
M ij(V,W) is defined as in (8) with v = Vi − Vj. The adversarial loss of any attacker, ℓA(x, y) = I[max_{i≠y} f iy(A(x)) > 0], can be bounded using the fact that f iySDP(x) ≥ f iy(A(x)). In particular,
ℓA(x, y) = 0 if max_{i≠y} f iySDP(x) < 0.   (12)
4 TRAINING THE CERTIFICATE
In the previous section, we proposed an upper bound (12) on the loss ℓA(x, y) of any attack A, based on the bound (11). Normal training with some classification loss ℓcls(V,W; xn, yn) like hinge loss or cross-entropy will encourage the pairwise margin f ij(x) to be large in magnitude, but won’t necessarily cause the second term in (11) involving M ij to be small. A natural strategy is thus to use the following regularized objective given training examples (xn, yn), which pushes down on both terms:
(W⋆, V⋆) = argmin_{W,V} Σn ℓcls(V,W; xn, yn) + Σ_{i≠j} λij max_{P⪰0, diag(P)≤1} ⟨M ij(V,W), P⟩,   (13)
where λij > 0 are the regularization hyperparameters. However, computing the gradients of the above objective involves finding the optimal solution of a semidefinite program, which is slow.
Duality to the rescue. Our computational burden is lifted by the beautiful theory of duality, which provides the following equivalence between the primal maximization problem over P , and a dual minimization problem over new variables c (see Section A for details):
max_{P⪰0, diag(P)≤1} ⟨M ij(V,W), P⟩ = min_{cij∈R^D} D · λ+max(M ij(V,W) − diag(cij)) + 1⊤max(cij, 0),   (14)

where D = (d + m + 1) and λ+max(B) is the maximum eigenvalue of B, or 0 if all eigenvalues are negative. This dual formulation allows us to introduce additional dual variables cij ∈ R^D that are optimized at the same time as the parameters V and W, resulting in an objective that can be trained efficiently using stochastic gradient methods.
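To make this concrete, the sketch below builds M(v, W) as in (8) and evaluates the dual bound of (14) for a given dual vector c; even c = 0 yields a valid, if loose, certificate:

```python
import numpy as np

# Build M(v, W) per Eq. (8) for the variable ordering y = [1; t; s].
def build_M(v, W):
    m, d = W.shape
    A = W.T @ np.diag(v)                 # W^T diag(v), shape (d, m)
    M = np.zeros((1 + d + m, 1 + d + m))
    M[0, 1 + d:] = np.ones(d) @ A        # top-right block: 1^T W^T diag(v)
    M[1:1 + d, 1 + d:] = A               # middle-right block: W^T diag(v)
    return M + M.T                       # fill the symmetric lower blocks

# Dual bound of Eq. (14): D * lambda_max^+(M - diag(c)) + 1^T max(c, 0).
def dual_bound(M, c):
    D = M.shape[0]
    lam = np.linalg.eigvalsh(M - np.diag(c)).max()     # exact, for small D
    return D * max(lam, 0.0) + np.maximum(c, 0.0).sum()
```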
The final objective. Using (14), we end up optimizing the following training objective:
(W⋆, V⋆, c⋆) = argmin_{W,V,c} Σn ℓcls(V,W; xn, yn) + Σ_{i≠j} λij · [D · λ+max(M ij(V,W) − diag(cij)) + 1⊤max(cij, 0)].   (15)
The objective in (15) can be optimized efficiently. The most expensive operation is λ+max, which requires computing the maximum eigenvector of the matrix M ij − diag(cij) in order to take gradients. This can be done efficiently using standard implementations of iterative methods like Lanczos. Further implementation details (including tuning of λij) are presented in Section 6.3.
Dual certificate of robustness. The dual formulation is also useful because any value of the dual is an upper bound on the optimal value of the primal. Specifically, if (W [t], V [t], c[t]) are the parameters at iteration t of training, then
f ij(A(x)) ≤ f ij(x) + (ε/4) [D · λ+max(M ij(V [t],W [t]) − diag(c[t]ij)) + 1⊤max(c[t]ij, 0)],   (16)
for any attack A. As we train the network, we obtain a quick upper bound on the worst-case adversarial loss directly from the regularization loss, without having to optimize an SDP each time.
5 OTHER UPPER BOUNDS
In Section 3, we described a function f ijSDP that yields an efficient upper bound on the adversarial loss, which we obtained using convex relaxations. One could consider other simple ways to upper bound the loss; we describe here two common ones based on the spectral and Frobenius norms.
Spectral bound: Note that v⊤(σ(Wx̃) − σ(Wx)) ≤ ‖v‖2 ‖σ(Wx̃) − σ(Wx)‖2 by Cauchy–Schwarz. Moreover, since σ is contractive, ‖σ(Wx̃) − σ(Wx)‖2 ≤ ‖W(x̃ − x)‖2 ≤ ‖W‖2 ‖x̃ − x‖2 ≤ ε√d ‖W‖2, where ‖W‖2 is the spectral norm (maximum singular value) of W. This yields the following upper bound that we denote by fspectral:

f ij(A(x)) ≤ f ijspectral(x) def= f ij(x) + ε√d ‖W‖2 ‖Vi − Vj‖2.   (17)
This measure of vulnerability to adversarial examples based on the spectral norms of the weights of each layer is considered in Szegedy et al. (2014) and Cisse et al. (2017).
Frobenius bound: For ease of training, the Frobenius norm is often regularized (weight decay) instead of the spectral norm. Since ‖W‖F ≥ ‖W‖2, we get a corresponding upper bound ffrobenius:

f ij(A(x)) ≤ f ijfrobenius(x) = f ij(x) + ε√d ‖W‖F ‖Vi − Vj‖2.   (18)
In Section 6, we empirically compare our proposed bound using f ijSDP to these two upper bounds.
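Both norm-based bounds are straightforward to compute; a sketch for one class pair (i, j):

```python
import numpy as np

# Spectral and Frobenius certificates, Eqs. (17) and (18).
def norm_bounds(x, W, V, i, j, eps):
    d = x.shape[0]
    f_ij = (V[i] - V[j]) @ np.maximum(W @ x, 0.0)     # pairwise margin
    v_norm = np.linalg.norm(V[i] - V[j])
    spectral = f_ij + eps * np.sqrt(d) * np.linalg.norm(W, 2) * v_norm
    frobenius = f_ij + eps * np.sqrt(d) * np.linalg.norm(W, 'fro') * v_norm
    return spectral, frobenius
```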
6 EXPERIMENTS
We evaluated our method on the MNIST dataset of handwritten digits, where the task is to classify images into one of ten classes. Our results can be summarized as follows: First, in Section 6.1, we show that our certificates of robustness are tighter than those based on simpler methods such as Frobenius and spectral bounds (Section 5), but our bounds are still too high to be meaningful for general networks. Then in Section 6.2, we show that by training on the certificates, we obtain networks with much better bounds and hence meaningful robustness. This reflects an important point: while accurately analyzing the robustness of an arbitrary network is hard, training the certificate jointly leads to a network that is robust and certifiably so. In Section 6.3, we present implementation details, design choices, and empirical observations that we made while implementing our method.
Networks. In this work, we focus on two-layer networks. In all our experiments, we used neural networks with m = 500 hidden units, and TensorFlow’s implementation of Adam (Kingma & Ba, 2014) as the optimizer; we considered networks with more hidden units, but these did not substantially improve accuracy. We experimented with both the multiclass hinge loss and cross-entropy. All hyperparameters (including the choice of loss function) were tuned based on the error of the Projected Gradient Descent (PGD) attack (Madry et al., 2017) at ε = 0.1; we report the hyperparameter settings below. We considered the following training objectives providing 5 different networks:
1. Normal training (NT-NN). Cross-entropy loss and no explicit regularization. 2. Frobenius norm regularization (Fro-NN). Hinge loss and a regularizer λ(‖W‖F + ‖v‖2)
with λ = 0.08.
3. Spectral norm regularization (Spe-NN). Hinge loss and a regularizer λ(‖W‖2 + ‖v‖2) with λ = 0.09.
4. Adversarial training (AT-NN). Cross-entropy with the adversarial loss against PGD as a regularizer, with the regularization parameter set to 0.5. We found that this regularized loss works better than optimizing only the adversarial loss, which is the defense proposed in Madry et al. (2017). We set the step size of the PGD adversary to 0.1, number of iterations to 40, and perturbation size to 0.3.
5. Proposed training objective (SDP-NN). Dual SDP objective described in Equation 15 of Section 4. Implementation details and hyperparameter values are detailed in Section 6.3.
Evaluating upper bounds. Below we will consider various upper bounds on the adversarial loss ℓAopt (based on our method, as well as the Frobenius and spectral bounds described in Section 5). Ideally we would compare these to the ground-truth adversarial loss ℓAopt, but computing this exactly is difficult. Therefore, we compare upper bounds on the adversarial loss with a lower bound on ℓAopt instead. The loss of any attack provides a valid lower bound, and we consider the strong Projected Gradient Descent (PGD) attack run against the cross-entropy loss, starting from a random point in Bε(x), with 5 random restarts. We observed that PGD against hinge loss did not work well, so we used cross-entropy even for attacking networks trained with the hinge loss.
6.1 QUALITY OF THE UPPER BOUND
For each of the five networks described above, we computed upper bounds on the 0-1 loss based on our certificate (which we refer to as the “SDP bound” in this section), as well as the Frobenius and spectral bounds described in Section 5. While Section 4 provides a procedure for efficiently obtaining an SDP bound as a result of training, for networks not trained with our method we need to solve an SDP at the end of training to obtain certificates. Fortunately, this only needs to be done once for every pair of classes. In our experiments, we use the modeling toolbox YALMIP (Löfberg, 2004) with Sedumi (Sturm, 1999) as a backend to solve the SDPs, using the dual form (14); this took roughly 10 minutes per SDP (around 8 hours in total for a given model).
In Figure 2, we display average values of the different upper bounds over the 10, 000 test examples, as well as the corresponding lower bound from PGD. We find that our bound is tighter than the Frobenius and spectral bounds for all the networks considered, but its tightness relative to the PGD lower bound varies across the networks. For instance, our bound is relatively tight on Fro-NN, but unfortunately Fro-NN is not very robust against adversarial examples (the PGD attack exhibits large
error). In contrast, the adversarially trained network AT-NN does appear to be robust to attacks, but our certificate, despite being much tighter than the Frobenius and spectral bounds, is far away from the PGD lower bound. The only network that is both robust and has relatively tight upper bounds is SDP-NN, which was explicitly trained to be both robust and certifiable as described in Section 4; we examine this network and the effects of training in more detail in the next subsection.
6.2 EVALUATING PROPOSED TRAINING OBJECTIVE.
In the previous section, we saw that the SDP bound, while being tighter than simpler upper bounds, could still be quite loose on arbitrary networks. However, optimizing against the SDP certificate seemed to make the certificate tighter. In this section, we explore the effect of different optimization objectives in more detail. First, we plot on a single axis the best upper bound (i.e., the SDP bound) and the lower bound (from PGD) on the adversarial loss obtained with each of the five training objectives discussed above. This is given in Figure 3a.
Neither spectral nor Frobenius norm regularization seems to be helpful for encouraging adversarial robustness—the actual performance of those networks against the PGD attack is worse than the upper bound for SDP-NN against all attacks. This shows that the SDP certificate actually provides a useful training objective for encouraging robustness compared to other regularizers.
Separately, we can ask whether SDP-NN is robust to actual attacks. We explore the robustness of our network in Figure 3b, where we plot the performance of SDP-NN against 3 attacks—the PGD attack from before, the Carlini-Wagner attack (Carlini & Wagner, 2017b) (another strong attack), and the weaker Fast Gradient Sign Method (FGSM) baseline. We see substantial robustness against all 3 attacks, even though our method was not explicitly trained with any of them in mind.
Next, we compare to other bounds reported in the literature. A rough ceiling is given by the network of Madry et al. (2017), which is a relatively large four-layer convolutional network adversarially trained against PGD. While this network has no accompanying certificate of robustness, it was evaluated against a number of attack strategies and had worst-case error 11% at ε = 0.3. Another set of numbers comes from Carlini et al. (2017), who use formal verification methods to compute Aopt exactly on 10 input examples for a small (72-node) variant of the Madry et al. network. The authors reported to us that this network misclassifies 6 out of 10 examples at ε = 0.05 (we note that 4 out of 10 of these were misclassified to start with, but 3 of the 4 can also be flipped to a different wrong class with some ε < 0.07).
At the value ε = 0.1 for which it was tuned, SDP-NN has error 16% against the PGD attack, and an upper bound of 35% error against any attack. This is substantially better than the small 72-node network, but also much worse than the full Madry et al. network. How much of the latter looseness comes from conservatism in our method, versus the fact that our network has only two layers? We can get some idea by considering the AT-NN network, which was trained similarly to Madry et al., but uses the same architecture as SDP-NN. From Figure 3a, we see that the error of SDP-NN against PGD (16%) is not much worse than that of AT-NN (11%), even though AT-NN was explicitly trained against the PGD attack. This suggests that most of the gap comes from the smaller network depth,
rather than from conservatism in the SDP bound. We are currently in the process of extending our approach to deeper networks, and optimistic about obtaining improved bounds with such networks.
Finally, we compare with the approach proposed in Kolter & Wong (2017) whose work appeared shortly after an initial version of our paper. They provide an upper bound on the adversarial loss using linear programs (LP) followed by a method to efficiently train networks to minimize this upper bound. In order to compare with SDP-NN, the authors provided us with a network with the same architecture as SDP-NN, but trained using their LP based objective. We call this network LP-NN. Table 1 shows that LP-NN and SDP-NN are comparable in terms of their robustness against PGD, and the robustness guarantees that they come with.
Interestingly, the SDP and LP approaches provide vacuous bounds for networks not trained to minimize the respective upper bounds (though these networks are indeed robust). This suggests that these two approaches are comparable, but complementary. Finally, we note that in contrast to this work, the approach of Kolter & Wong (2017) extends to deeper networks, which allows them to train a four-layer CNN with a provable upper bound on adversarial error of 8.4% error.
6.3 IMPLEMENTATION DETAILS
We implemented our training objective in TensorFlow, and implemented λ+max as a custom operator using SciPy’s implementation of the Lanczos algorithm for fast top eigenvector computation; occasionally Lanczos fails to converge due to a small eigen-gap, in which case we back off to a full SVD. We used hinge loss as the classification loss, and decayed the learning rate in steps from 10−3 to 10−5, decreasing by a factor of 10 every 30 epochs. Each gradient step involves computing top eigenvectors for 45 different matrices, one for each pair of classes (i, j). In order to speed up computation, for each update, we randomly pick it and only compute gradients for pairs (it, j), j ≠ it, requiring only 9 top eigenvector computations in each step.
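A sketch of this eigenvalue computation with the fallback follows (the text mentions a full SVD; a dense symmetric eigendecomposition serves the same role here):

```python
import numpy as np
from scipy.sparse.linalg import eigsh, ArpackNoConvergence

# lambda_max^+ via Lanczos, with a dense fallback when the eigen-gap is
# too small for Lanczos to converge.
def lam_max_plus(A):
    try:
        w = eigsh(A, k=1, which='LA', return_eigenvectors=False)
        lam = float(w[0])
    except ArpackNoConvergence:
        lam = float(np.linalg.eigvalsh(A).max())  # dense fallback
    return max(lam, 0.0)
```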
For the regularization parameters λij, the simplest idea is to set them all equal to the same value; this leads to the unweighted regularization scheme where λij = λ for all pairs (i, j). We tuned λ to 0.05, which led to reasonably good bounds. However, we observed that certain pairs of classes tended to have larger margins f ij(x) than other classes, which meant that certain label pairs appeared in the maximum of (12) much more often. That led us to consider a weighted regularization scheme with λij = wij λ, where wij is the fraction of training points for which the label i (or j) appears as the maximizing term in (12). We updated the values of these weights every 20 epochs. Figure 4a compares the PGD lower bound and SDP upper bound for the unweighted and weighted networks. The weighted network is better than the unweighted network for both the lower and upper bounds.
Finally, we saw in Equation 16 of Section 4 that the dual variables cij provide a quick-to-compute certificate of robustness. Figure 4b shows that the certificates provided by these dual variables are very close to what we would obtain by fully optimizing the semidefinite programs. These dual certificates made it easy to track robustness across epochs of training and to tune hyperparameters.
7 DISCUSSION
In this work, we proposed a method for producing certificates of robustness for neural networks, and for training against these certificates to obtain networks that are provably robust against adversaries.
Related work. In parallel and independent work, Kolter & Wong (2017) also provide provably robust networks against `∞ perturbations by using convex relaxations. While our approach uses a single semidefinite program to compute an upper bound on the adversarial loss, Kolter & Wong
(2017) use separate linear programs for every data point, and apply their method to networks of depth up to four. In theory, neither bound is strictly tighter than the other, and our experiments (Table 1) suggest that the two bounds are complementary. Combining the approaches seems to be a promising future direction.
Katz et al. (2017a) and the follow-up Carlini et al. (2017) also provide certificates of robustness for neural networks against ℓ∞ perturbations. That work uses SMT solvers, which are a tool from the formal verification literature. The SMT solver can answer the binary question “Is there an adversarial example within distance ε of the input x?”, and is correct whenever it terminates. The main drawback of SMT and similar formal verification methods is that they are slow: they have worst-case exponential-time scaling in the size of the network; moreover, to use them during training would require a separate search for each gradient step. Huang et al. (2017) use SMT solvers and are able to analyze state-of-the-art networks on MNIST, but they make various approximations such that their numbers are not true upper bounds.
Bastani et al. (2016) provide tractable certificates but require ε to be small enough to ensure that the entire ℓ∞ ball around an input lies within the same linear region. For the networks and values of ε that we consider in our paper, we found that this condition did not hold. Recently, Hein & Andriushchenko (2017) proposed a bound for guaranteeing robustness to ℓp-norm perturbations, based on the maximum p/(p−1)-norm of the gradient in the ε-ball around the inputs. Hein & Andriushchenko (2017) show how to efficiently compute this bound for p = 2, as opposed to our work which focuses on ℓ∞ and requires different techniques to achieve scalability.
Madry et al. (2017) perform adversarial training against PGD on the MNIST and CIFAR-10 datasets, obtaining networks that they suggest are “secure against first-order adversaries”. However, this is based on an empirical observation that PGD is nearly-optimal among gradient-based attacks, and does not correspond to any formal robustness guarantee.
Finally, the notion of a certificate appears in the theory of convex optimization, but means something different in that context; specifically, it corresponds to a proof that a point is near the optimum of a convex function, whereas here our certificates provide upper bounds on non-convex functions. Additionally, while robust optimization (Bertsimas et al., 2011) provides a tool for optimizing objectives with robustness constraints, applying it directly would involve the same intractable optimization for Aopt that we deal with here.
Other approaches to verification. While they have not been explored in the context of neural networks, there are approaches in the control theory literature for verifying robustness of dynamical systems, based on Lyapunov functions (Lyapunov, 1892; 1992). We can think of the activations in a neural network as the evolution of a time-varying dynamical system, and attempt to prove stability around a trajectory of this system (Tedrake et al., 2010; Tobenkin et al., 2011). Such methods typically use sum-of-squares verification (Papachristodoulou & Prajna, 2002; 2005; Parrilo, 2003) and are restricted to relatively low-dimensional dynamical systems, but could plausibly scale to
larger settings. Another approach is to construct families of networks that are provably robust a priori, which would remove the need to verify robustness of the learned model; to our knowledge this has not been done for any expressive model families.
Adversarial examples and secure ML. There has been a great deal of recent work on the security of ML systems; we provide only a sampling here, and refer the reader to Barreno et al. (2010), Biggio et al. (2014a), Papernot et al. (2016b), and Gardiner & Nagaraja (2016) for some recent surveys.
Adversarial examples for neural networks were first discovered by Szegedy et al. (2014), and since then a number of attacks and defenses have been proposed. We have already discussed gradientbased methods as well as defenses based on adversarial training. There are also other attacks based on, e.g., saliency maps (Papernot et al., 2016a), KL divergence (Miyato et al., 2015), and elastic net optimization (Chen et al., 2017); many of these attacks are collated in the cleverhans repository (Goodfellow et al., 2016). For defense, rather than making networks robust to adversaries, some work has focused on simply detecting adversarial examples. However, Carlini & Wagner (2017a) recently showed that essentially all known detection methods can be subverted by strong attacks.
As explained in Barreno et al. (2010), there are a number of different attack models beyond the testtime attacks considered here, based on different attacker goals and capabilities. For instance, one can consider data poisoning attacks, where an attacker modifies the training set in an effort to affect test-time performance. Newsome et al. (2006), Laskov & Šrndic̀ (2014), and Biggio et al. (2014b) have demonstrated poisoning attacks against real-world systems.
Other types of certificates. Certificates of performance for machine learning systems are desirable in a number of settings. This includes verifying safety properties of air traffic control systems (Katz et al., 2017a;b) and self-driving cars (O’Kelly et al., 2016; 2017), as well as security applications such as robustness to training time attacks (Steinhardt et al., 2017). More broadly, certificates of performance are likely necessary for deploying machine learning systems in critical infrastructure such as internet packet routing (Winstein & Balakrishnan, 2013; Sivaraman et al., 2014). In robotics, certificates of stability are routinely used both for safety verification (Lygeros et al., 1999; Mitchell et al., 2005) and controller synthesis (Başar & Bernhard, 2008; Tedrake et al., 2010).
In traditional verification work, Rice’s theorem (Rice, 1953) is a strong impossibility result essentially stating that most properties of most programs are undecidable. Similarly, we should expect that verifying robustness for arbitrary neural networks is hard. However, the results in this work suggest that it is possible to learn neural networks that are amenable to verification, in the same way that it is possible to write programs that can be formally verified. Optimistically, given expressive enough certification methods and model families, as well as strong enough specifications of robustness, one could even hope to train vector representations of natural images with strong robustness properties, thus finally closing the chapter on adversarial vulnerabilities in the visual domain.
Reproducibility. All code, data and experiments for this paper are available on the Codalab platform at https://worksheets.codalab.org/worksheets/ 0xa21e794020bb474d8804ec7bc0543f52/.
Acknowledgements. This work was partially supported by a Future of Life Institute Research Award and Open Philanthrophy Project Award. JS was supported by a Fannie & John Hertz Foundation Fellowship and an NSF Graduate Research Fellowship. We are also grateful to Guy Katz, Zico Kolter and Eric Wong for providing relevant experimental results for comparison, as well as to the anonymous reviewers for useful feedback and references.
A DUALITY
In this section we justify the duality relation (14). Recall that the primal program is
maximize ⟨M, P⟩   (19)
subject to P ⪰ 0, diag(P) ≤ 1.
Rather than taking the dual directly, we first add the redundant constraint tr(P ) ≤ d+m+ 1 (it is redundant because the SDP is in d+m+ 1 dimensions and diag(P ) ≤ 1). This yields
maximize ⟨M, P⟩   (20)
subject to P ⪰ 0, diag(P) ≤ 1, tr(P) ≤ d + m + 1.
We now form the Lagrangian for the constraints diag(P ) ≤ 1, leaving the other two constraints as-is. This yields the equivalent optimization problem
maximize min_{c≥0} ⟨M, P⟩ + c⊤(1 − diag(P))   (21)
subject to P ⪰ 0, tr(P) ≤ d + m + 1.
Now, we apply minimax duality to swap the order of min and max; the value of (21) is thus equal to
minimize max_{P⪰0, tr(P)≤d+m+1} ⟨M, P⟩ + c⊤(1 − diag(P))   (22)
subject to c ≥ 0.
The inner maximum can be simplified as

1⊤c + (d + m + 1) · max_{P⪰0, tr(P)≤1} ⟨M − diag(c), P⟩ = 1⊤c + (d + m + 1) λ+max(M − diag(c)).   (23)
Therefore, (22) simplifies to

minimize 1⊤c + (d + m + 1) λ+max(M − diag(c))   (24)
subject to c ≥ 0.

This is almost the form given in (14), except that c is constrained to be non-negative and we have 1⊤c instead of 1⊤max(c, 0). However, note that for the λ+max term, it is always better for c to be larger; therefore, replacing c with max(c, 0) means that the optimal value of c will always be non-negative, thus allowing us to drop the c ≥ 0 constraint and optimize c in an unconstrained manner. This finally yields the claimed duality relation (14).
This is almost the form given in (14), except that c is constrained to be non-negative and we have 1>c instead of 1>max(c, 0). However, note that for the λ+max term, it is always better for c to be larger; therefore, replacing c with max(c, 0) means that the optimal value of c will always be nonnegative, thus allowing us to drop the c ≥ 0 constraint and optimize c in an unconstrained manner. This finally yields the claimed duality relation (14). | 1. What is the focus of the paper regarding robust classifiers?
2. What are the strengths of the proposed approach, particularly in differentiability and optimization?
3. What are the limitations of the current bound regarding the attack model and model types?
4. How does the reviewer assess the significance and novelty of the paper's contribution?
5. What are some potential future research directions mentioned by the reviewer? | Review | Review
This paper develops a new differentiable upper bound on the performance of classifier when the adversarial input in l_infinity is assumed to be applied.
While the attack model is quite general, the current bound is only valid for linear models and neural networks with one hidden layer, so the result is quite restrictive.
However, the new bound is an "upper" bound on the worst-case performance, which is very different from the conventional sampling-based "lower" bounds. Therefore, minimizing this upper bound together with a classification loss makes perfect sense and provides a theoretically sound approach to training a robust classifier.
The paper derives the gradient of this new upper bound with respect to the model parameters, so the usual first-order optimization schemes can be applied to the joint objective (loss + upper bound).
In conclusion, I recommend that this paper be accepted, since it presents a new and feasible direction for a principled approach to training a robust classifier, and the paper is clearly written and easy to follow.
There are possible future directions to be developed.
1. Apply the sum-of-squares (SOS) method.
The paper's SDP relaxation is the straightforward relaxation of the quadratic program (QP); in terms of the SOS relaxation hierarchy, it is the first level. One can increase the complexity by going beyond the first level, which should provide a computationally more challenging but tighter upper bound.
The paper already mentions this direction, and it would be interesting to see the experimental results.
2. Develop a similar relaxation for deep neural networks.
The author already mentioned that they are pursuing this direction. While extending the result to general deep neural networks might be hard, residual networks may be fine thanks to their structure.
ICLR | Title
Certified Defenses against Adversarial Examples
Abstract
While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most = 0.1 can cause more than 35% test error.
1 INTRODUCTION
Despite the impressive (and sometimes even superhuman) accuracies of machine learning on diverse tasks such as object recognition (He et al., 2015), speech recognition (Xiong et al., 2016), and playing Go (Silver et al., 2016), classifiers still fail catastrophically in the presence of small imperceptible but adversarial perturbations (Szegedy et al., 2014; Goodfellow et al., 2015; Kurakin et al., 2016). In addition to being an intriguing phenomenon, the existence of such “adversarial examples” exposes a serious vulnerability in current ML systems (Evtimov et al., 2017; Sharif et al., 2016; Carlini et al., 2016). While formally defining an “imperceptible” perturbation is difficult, a commonly-used proxy is perturbations that are bounded in ℓ∞-norm (Goodfellow et al., 2015; Madry et al., 2017; Tramèr et al., 2017); we focus on this attack model in this paper, as even for this proxy it is not known how to construct high-performing image classifiers that are robust to perturbations.
While a proposed defense (classifier) is often empirically shown to be successful against the set of attacks known at the time, new stronger attacks are subsequently discovered that render the defense useless. For example, defensive distillation (Papernot et al., 2016c) and adversarial training against the Fast Gradient Sign Method (Goodfellow et al., 2015) were two defenses that were later shown to be ineffective against stronger attacks (Carlini & Wagner, 2016; Tramèr et al., 2017). In order to break this arms race between attackers and defenders, we need to come up with defenses that are successful against all attacks within a certain class.
However, even computing the worst-case error for a given network against all adversarial perturbations in an ℓ∞-ball is computationally intractable. One common approximation is to replace the worst-case loss with the loss from a given heuristic attack strategy, such as the Fast Gradient Sign Method (Goodfellow et al., 2015) or more powerful iterative methods (Carlini & Wagner, 2017a; Madry et al., 2017). Adversarial training minimizes the loss with respect to these heuristics. However, this essentially minimizes a lower bound on the worst-case loss, which is problematic since points where the bound is loose have disproportionately lower objective values, which could lure and mislead an optimizer. Indeed, while adversarial training often provides robustness against a specific attack, it often fails to generalize to new attacks, as described above. Another approach is to compute the worst-case perturbation exactly using discrete optimization (Katz et al., 2017a; Carlini
et al., 2017). Currently, these approaches can take up to several hours or longer to compute the loss for a single example even for small networks with a few hundred hidden units. Training a network would require performing this computation in the inner loop, which is infeasible.
In this paper, we introduce an approach that avoids both the inaccuracy of lower bounds and the intractability of exact computation, by computing an upper bound on the worst-case loss for neural networks with one hidden layer, based on a semidefinite relaxation that can be computed efficiently. This upper bound serves as a certificate of robustness against all attacks for a given network and input. Minimizing an upper bound is safer than minimizing a lower bound, because points where the bound is loose have disproportionately higher objective values, which the optimizer will tend to avoid. Furthermore, our certificate of robustness, by virtue of being differentiable, is trainable—it can be optimized at training time jointly with the network, acting as a regularizer that encourages robustness against all ℓ∞ attacks.
In summary, we are the first (along with the concurrent work of Kolter & Wong (2017)) to demonstrate a certifiable, trainable, and scalable method for defending against adversarial examples on two-layer networks. We train a network on MNIST whose test error on clean data is 4.2%, and which comes with a certificate that no attack can misclassify more than 35% of the test examples using ℓ∞ perturbations of size ε = 0.1.
Notation. For a vector z ∈ R^n, we use z_i to denote the ith coordinate of z. For a matrix Z ∈ R^{m×n}, Z_i denotes the ith row. For any activation function σ : R → R (e.g., sigmoid, ReLU) and a vector z ∈ R^n, σ(z) is a vector in R^n with σ(z)_i = σ(z_i) (the non-linearity is applied element-wise). We use B_ε(z) to denote the ℓ∞ ball of radius ε around z ∈ R^d: B_ε(z) = {z̃ | |z̃_i − z_i| ≤ ε for i = 1, 2, . . . , d}. Finally, we denote the vector of all zeros by 0 and the vector of all ones by 1.
2 SETUP
Score-based classifiers. Our goal is to learn a mapping C : X → Y, where X = R^d is the input space (e.g., images) and Y = {1, . . . , k} is the set of k class labels (e.g., object categories). Assume C is driven by a scoring function f^i : X → R for each class i ∈ Y, where the classifier chooses the class with the highest score: C(x) = argmax_{i∈Y} f^i(x). Also, define the pairwise margin f^{ij}(x) := f^i(x) − f^j(x) for every pair of classes (i, j). Note that the classifier outputs C(x) = i iff f^{ij}(x) > 0 for all alternative classes j ≠ i. Normally, a classifier is evaluated on the 0-1 loss ℓ(x, y) = I[C(x) ≠ y]. This paper focuses on linear classifiers and neural networks with one hidden layer. For linear classifiers, f^i(x) := W_iᵀ x, where W_i is the ith row of the parameter matrix W ∈ R^{k×d}.
For neural networks with one hidden layer consisting of m hidden units, the scoring function is f^i(x) = V_iᵀ σ(Wx), where W ∈ R^{m×d} and V ∈ R^{k×m} are the parameters of the first and second layer, respectively, and σ is a non-linear activation function applied elementwise (e.g., for ReLUs, σ(z) = max(z, 0)). We will assume below that the gradients of σ are bounded: σ′(z) ∈ [0, 1] for all z ∈ R; this is true for ReLUs, as well as for sigmoids (with the stronger bound σ′(z) ∈ [0, 1/4]).

Attack model. We are interested in classification in the presence of an attacker A : X → X that takes a (test) input x and returns a perturbation x̃. We consider attackers A that can perturb each feature x_i by at most ε ≥ 0; formally, A(x) is required to lie in the ℓ∞ ball B_ε(x) := {x̃ | ‖x̃ − x‖∞ ≤ ε}, which is the standard constraint first proposed in Szegedy et al. (2014). Define the adversarial loss with respect to A as ℓ_A(x, y) = I[C(A(x)) ≠ y]. We assume the white-box setting, where the attacker A has full knowledge of C. The optimal (untargeted) attack chooses the input that maximizes the pairwise margin of an incorrect class i over the correct class y: A_opt(x) = argmax_{x̃∈B_ε(x)} max_i f^{iy}(x̃). For a neural network, computing A_opt is a non-convex optimization problem; heuristics are typically employed, such as the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), which perturbs x based on the gradient, or the Carlini-Wagner attack, which performs iterative optimization (Carlini & Wagner, 2017b).
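To make the attack model concrete, here is a minimal NumPy sketch of FGSM on a pairwise margin of the two-layer ReLU network defined above; the weight scaling and the clipping to [0, 1] pixel values are illustrative assumptions rather than part of the formal setup:

import numpy as np

def fgsm_attack(x, y_true, y_target, W, V, eps):
    """One-step FGSM on the pairwise margin f^{iy}(x) = (V_i - V_y)^T relu(W x).

    Perturbs x by eps * sign(grad) to (approximately) maximize the margin of
    the incorrect class y_target over the correct class y_true."""
    v = V[y_target] - V[y_true]            # difference of second-layer weights
    h = W @ x                              # pre-activations
    # gradient of f(x) = v^T relu(W x) w.r.t. x (subgradient at 0 taken as 0)
    grad = W.T @ (v * (h > 0).astype(float))
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)  # stay in pixel range

# toy usage on random weights (illustration only)
rng = np.random.default_rng(0)
d, m, k = 784, 500, 10
W = rng.normal(size=(m, d)) / np.sqrt(d)
V = rng.normal(size=(k, m)) / np.sqrt(m)
x = rng.uniform(size=d)
x_adv = fgsm_attack(x, y_true=2, y_target=1, W=W, V=V, eps=0.1)
print(np.abs(x_adv - x).max())  # <= 0.1 by construction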
3 CERTIFICATE ON THE ADVERSARIAL LOSS
For ease of exposition, we first consider binary classification with classes Y = {1, 2}; the multiclass extension is discussed at the end of Section 3.3. Without loss of generality, assume the correct label for x is y = 2. Simplifying notation, let f(x) = f^1(x) − f^2(x) be the margin of the incorrect class over the correct class. Then A_opt(x) = argmax_{x̃∈B_ε(x)} f(x̃) is the optimal attack, which is successful if f(A_opt(x)) > 0. Since f(A_opt(x)) is intractable to compute, we will try to upper bound it via a tractable relaxation.
In the rest of this section, we first review a classic result in the simple case of linear networks, where a tight upper bound is based on the ℓ₁-norm of the weights (Section 3.1). We then extend this to general classifiers, in which f(A_opt(x)) can be upper bounded using the maximum ℓ₁-norm of the gradient at any point x̃ ∈ B_ε(x) (Section 3.2). For two-layer networks, this quantity is upper bounded by the optimal value f_QP(x) of a non-convex quadratic program (QP) (Section 3.3), which in turn is upper bounded by the optimal value f_SDP(x) of a semidefinite program (SDP). The SDP is convex and can be computed exactly (which is important for obtaining actual certificates). To summarize, we have the following chain of inequalities, where the second inequality is derived in Section 3.2 and the last two in Section 3.3:

f(A(x)) ≤ f(A_opt(x)) ≤ f(x) + ε · max_{x̃∈B_ε(x)} ‖∇f(x̃)‖₁ ≤ f_QP(x) ≤ f_SDP(x),   (1)

which implies that the adversarial loss ℓ_A(x) = I[f(A(x)) > 0] with respect to any attacker A is upper bounded by I[f_SDP(x) > 0]. Note that for certain non-linearities such as ReLUs, ∇f(x̃) does not exist everywhere, but our analysis below holds as long as f is differentiable almost everywhere.
3.1 LINEAR CLASSIFIERS
For (binary) linear classifiers, we have f(x) = (W₁ − W₂)ᵀx, where W₁, W₂ ∈ R^d are the weight vectors for the two classes. For any input x̃ ∈ B_ε(x), Hölder's inequality with ‖x − x̃‖∞ ≤ ε gives:

f(x̃) = f(x) + (W₁ − W₂)ᵀ(x̃ − x) ≤ f(x) + ε‖W₁ − W₂‖₁.   (2)

Note that this bound is tight, obtained by taking A_opt(x)_i = x_i + ε · sign(W₁ᵢ − W₂ᵢ).
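As a quick numerical sanity check (with randomly drawn weights, purely illustrative), the following sketch verifies that this choice of A_opt attains the bound (2) exactly:

import numpy as np

rng = np.random.default_rng(1)
d, eps = 20, 0.1
W1, W2 = rng.normal(size=d), rng.normal(size=d)
x = rng.uniform(size=d)

f = lambda z: (W1 - W2) @ z
bound = f(x) + eps * np.abs(W1 - W2).sum()    # RHS of (2)
x_opt = x + eps * np.sign(W1 - W2)            # the maximizing perturbation
assert np.isclose(f(x_opt), bound)            # the bound is attained exactly
print(f(x), f(x_opt), bound)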
3.2 GENERAL CLASSIFIERS
For more general classifiers, we cannot compute f(A_opt(x)) exactly, but motivated by the above, we can use the gradient to obtain a linear approximation g:

f(x̃) ≈ g(x̃) := f(x) + ∇f(x)ᵀ(x̃ − x) ≤ f(x) + ε‖∇f(x)‖₁.   (3)
Using this linear approximation to generate A(x) corresponds exactly to the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015). However, f is only close to g when x̃ is very close to x, and people have observed the gradient masking phenomenon (Tramèr et al., 2017; Papernot et al., 2016b) in several proposed defenses that train against approximations like g, such as saturating networks (Nayebi & Ganguli, 2017), distillation (Papernot et al., 2016c), and adversarial training (Goodfellow et al., 2015). Specifically, defenses that try to minimize ‖∇f(x)‖1 locally at the training points result in loss surfaces that exhibit sharp curvature near those points, essentially rendering the linear approximation g(x̃) meaningless. Some attacks (Carlini & Wagner, 2016; Tramèr et al., 2017) evade these defenses and witness a large f(Aopt(x)). Figure 1a provides a simple illustration.
We propose an alternative approach: use integration to obtain an exact expression for f(x̃) in terms of the gradients along the line between x and x̃:
f(x̃) = f(x) + ∫₀¹ ∇f(t·x̃ + (1 − t)·x)ᵀ(x̃ − x) dt ≤ f(x) + ε · max_{x̃∈B_ε(x)} ‖∇f(x̃)‖₁,   (4)

where the inequality follows from the fact that t·x̃ + (1 − t)·x ∈ B_ε(x) for all t ∈ [0, 1]. The key difference between (4) and (3) is that we consider the gradients over the entire ball B_ε(x) rather than only at x (Figure 1b). However, computing the RHS of (4) is intractable in general. For two-layer neural networks, this optimization has additional structure which we will exploit in the next section.
3.3 TWO-LAYER NEURAL NETWORKS
We now unpack the upper bound (4) for two-layer neural networks. Recall from Section 2 that f(x) = f^1(x) − f^2(x) = vᵀσ(Wx), where v := V₁ − V₂ ∈ R^m is the difference in second-layer weights for the two classes. Let us try to bound the norm of the gradient ‖∇f(x̃)‖₁ for x̃ ∈ B_ε(x). If we apply the chain rule, we see that the only dependence on x̃ is σ′(Wx̃), the activation derivatives. We now leverage our assumption that σ′(z) ∈ [0, 1]^m for all vectors z ∈ R^m, so that we can optimize over possible activation derivatives s ∈ [0, 1]^m directly, independent of x (note that there is potential looseness because not all such s need be obtainable via some x̃ ∈ B_ε(x)). Therefore:
‖∇f(x̃)‖₁ ⁽ⁱ⁾= ‖Wᵀ diag(v) σ′(Wx̃)‖₁ ⁽ⁱⁱ⁾≤ max_{s∈[0,1]^m} ‖Wᵀ diag(v) s‖₁ ⁽ⁱⁱⁱ⁾= max_{s∈[0,1]^m, t∈[−1,1]^d} tᵀWᵀ diag(v) s,   (5)
where (i) follows from the chain rule, (ii) uses the fact that σ has bounded derivatives σ′(z) ∈ [0, 1], and (iii) follows from the identity ‖z‖₁ = max_{t∈[−1,1]^d} tᵀz. (Note that for sigmoid networks, where σ′(z) ∈ [0, 1/4], we can strengthen the above bound by a corresponding factor of 1/4.) Substituting the bound (5) into (4), we obtain an upper bound on the adversarial loss that we call f_QP:
f(A_opt(x)) ≤ f(x) + ε · max_{x̃∈B_ε(x)} ‖∇f(x̃)‖₁ ≤ f(x) + ε · max_{s∈[0,1]^m, t∈[−1,1]^d} tᵀWᵀ diag(v) s =: f_QP(x).   (6)
Unfortunately, (6) still involves a non-convex optimization problem (since Wᵀ diag(v) is not necessarily negative semidefinite). In fact, it is similar to the NP-hard MAXCUT problem, which requires maximizing xᵀLx over x ∈ [−1, 1]^d for a graph with Laplacian matrix L. While MAXCUT is NP-hard, it can be efficiently approximated, as shown by the celebrated semidefinite programming relaxation for MAXCUT in Goemans & Williamson (1995). We follow a similar approach here to obtain an upper bound on f_QP(x).
First, to make our variables lie in [−1, 1]^m instead of [0, 1]^m, we reparametrize s to produce:

max_{s∈[−1,1]^m, t∈[−1,1]^d} (1/2) · tᵀWᵀ diag(v)(1 + s).   (7)
Next, we pack the variables into a vector y ∈ R^{m+d+1} and the parameters into a matrix M:

y := (1, t, s),   M(v, W) := [ 0              0            1ᵀWᵀ diag(v)
                               0              0            Wᵀ diag(v)
                               diag(v)ᵀW 1    diag(v)ᵀW    0            ].   (8)

In terms of these new objects, our objective takes the form:

max_{y∈[−1,1]^{m+d+1}} (1/4) · yᵀ M(v, W) y = max_{y∈[−1,1]^{m+d+1}} (1/4) · ⟨M(v, W), yyᵀ⟩.   (9)
Note that every valid vector y ∈ [−1, 1]^{m+d+1} satisfies the constraints yyᵀ ⪰ 0 and (yyᵀ)_jj = 1. Defining P = yyᵀ, we obtain the following convex semidefinite relaxation of our problem:

f_QP(x) ≤ f_SDP(x) := f(x) + (ε/4) · max_{P⪰0, diag(P)≤1} ⟨M(v, W), P⟩.   (10)
Note that the optimization of the semidefinite program depends only on the weights v and W and does not depend on the inputs x, so it only needs to be computed once for a model (v,W ).
Semidefinite programs can be solved with off-the-shelf optimizers, although these optimizers are somewhat slow on large instances. In Section 4 we propose a fast stochastic method for training, which only requires computing the top eigenvalue of a matrix.
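Such programs can be specified directly in a modeling language; below is a minimal sketch using CVXPY (a tooling assumption for illustration; the certificates reported in Section 6.1 were computed with YALMIP and Sedumi). The random symmetric M stands in for M(v, W):

import cvxpy as cp
import numpy as np

def sdp_bound(M, f_x, eps):
    """Solve the relaxation (10): f_SDP = f(x) + (eps/4) * max <M, P>
    over PSD P with diag(P) <= 1. For symmetric M, <M, P> = trace(M P)."""
    D = M.shape[0]
    P = cp.Variable((D, D), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.trace(M @ P)), [cp.diag(P) <= 1])
    prob.solve()
    return f_x + (eps / 4.0) * prob.value

# toy instance: a small random symmetric M (illustration only)
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
M = (A + A.T) / 2
print(sdp_bound(M, f_x=-1.0, eps=0.1))

Note that, as remarked above, the expensive maximization depends only on (v, W), so in practice it is solved once per model rather than once per input.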
Generalization to multiple classes. The preceding arguments all generalize to the pairwise margins f^{ij}, to give:

f^{ij}(A(x)) ≤ f^{ij}_SDP(x) := f^{ij}(x) + (ε/4) · max_{P⪰0, diag(P)≤1} ⟨M^{ij}(V, W), P⟩,   (11)

where M^{ij}(V, W) is defined as in (9) with v = V_i − V_j. The adversarial loss of any attacker, ℓ_A(x, y) = I[max_{i≠y} f^{iy}(A(x)) > 0], can be bounded using the fact that f^{iy}_SDP(x) ≥ f^{iy}(A(x)). In particular,

ℓ_A(x, y) = 0 if max_{i≠y} f^{iy}_SDP(x) < 0.   (12)
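Checking the certificate (12) for an example then reduces to a single comparison; a minimal sketch with hypothetical certified margins:

import numpy as np

def is_certified(f_sdp_margins, y):
    """Certificate check from (12): the example (x, y) is provably robust to
    every attack in B_eps(x) if every certified pairwise margin f^{iy}_SDP(x)
    of an incorrect class i over the true class y is negative.

    f_sdp_margins: length-k array with f_sdp_margins[i] = f^{iy}_SDP(x);
    the entry for i == y is ignored."""
    others = np.delete(f_sdp_margins, y)
    return np.max(others) < 0

margins = np.array([-0.3, -1.2, 0.0, -0.5])  # hypothetical certified margins, y = 2
print(is_certified(margins, y=2))            # True: no attack can flip the label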
4 TRAINING THE CERTIFICATE
In the previous section, we proposed an upper bound (12) on the loss ℓ_A(x, y) of any attack A, based on the bound (11). Normal training with some classification loss ℓ_cls(V, W; x_n, y_n) like hinge loss or cross-entropy will encourage the pairwise margin f^{ij}(x) to be large in magnitude, but won't necessarily cause the second term in (11) involving M^{ij} to be small. A natural strategy is thus to use the following regularized objective given training examples (x_n, y_n), which pushes down on both terms:

(W*, V*) = argmin_{W,V} Σ_n ℓ_cls(V, W; x_n, y_n) + Σ_{i≠j} λ_ij · max_{P⪰0, diag(P)≤1} ⟨M^{ij}(V, W), P⟩,   (13)

where λ_ij > 0 are the regularization hyperparameters. However, computing the gradients of the above objective involves finding the optimal solution of a semidefinite program, which is slow.
Duality to the rescue. Our computational burden is lifted by the beautiful theory of duality, which provides the following equivalence between the primal maximization problem over P , and a dual minimization problem over new variables c (see Section A for details):
max_{P⪰0, diag(P)≤1} ⟨M^{ij}(V, W), P⟩ = min_{c_ij∈R^D} D · λ⁺max(M^{ij}(V, W) − diag(c_ij)) + 1ᵀ max(c_ij, 0),   (14)

where D = d + m + 1 and λ⁺max(B) is the maximum eigenvalue of B, or 0 if all eigenvalues are negative. This dual formulation allows us to introduce additional dual variables c_ij ∈ R^D that are optimized at the same time as the parameters V and W, resulting in an objective that can be trained efficiently using stochastic gradient methods.
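Because any c yields a valid bound, the right-hand side of (14) can be evaluated with a single top-eigenvalue computation. A minimal NumPy/SciPy sketch follows; the random M and c are purely illustrative stand-ins for M^{ij}(V, W) and the learned dual variables:

import numpy as np
from scipy.sparse.linalg import eigsh

def dual_bound(M, c):
    """Dual objective from (14): any c in R^D gives a valid upper bound
    D * lambda_max^+(M - diag(c)) + 1^T max(c, 0) on max <M, P>."""
    D = M.shape[0]
    B = M - np.diag(c)
    lam = eigsh(B, k=1, which='LA', return_eigenvectors=False)[0]  # Lanczos
    return D * max(lam, 0.0) + np.maximum(c, 0.0).sum()

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 50))
M = (A + A.T) / 2
c = rng.normal(size=50)
print(dual_bound(M, c))   # an upper bound for any c, without solving the SDP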
The final objective. Using (14), we end up optimizing the following training objective:

(W*, V*, c*) = argmin_{W,V,c} Σ_n ℓ_cls(V, W; x_n, y_n) + Σ_{i≠j} λ_ij · [ D · λ⁺max(M^{ij}(V, W) − diag(c_ij)) + 1ᵀ max(c_ij, 0) ].   (15)
The objective in (15) can be optimized efficiently. The most expensive operation is λ⁺max, which requires computing the maximum eigenvector of the matrix M^{ij} − diag(c_ij) in order to take gradients. This can be done efficiently using standard implementations of iterative methods like Lanczos. Further implementation details (including the tuning of λ_ij) are presented in Section 6.3.
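For intuition on how gradients of (15) are taken: when the top eigenvalue of a symmetric matrix B is simple with unit eigenvector u, its derivative with respect to B is uuᵀ, and λ⁺max has zero gradient once λmax(B) < 0. A minimal sketch of such an operator with a finite-difference sanity check; the dense fallback mirrors the implementation detail in Section 6.3, and the random test matrix is illustrative:

import numpy as np
from scipy.sparse.linalg import eigsh

def lambda_max_plus_and_grad(B):
    """Return lambda_max^+(B) and its (sub)gradient w.r.t. symmetric B.
    For a simple top eigenvalue with unit eigenvector u, d lambda_max / dB = u u^T;
    when lambda_max < 0, lambda_max^+ = 0 and the gradient vanishes."""
    try:
        vals, vecs = eigsh(B, k=1, which='LA')   # Lanczos for the top eigenpair
    except Exception:                            # small eigen-gap: dense fallback
        vals, vecs = np.linalg.eigh(B)
        vals, vecs = vals[-1:], vecs[:, -1:]
    lam, u = vals[0], vecs[:, 0]
    if lam <= 0:
        return 0.0, np.zeros_like(B)
    return lam, np.outer(u, u)

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 30))
B = (A + A.T) / 2
val, grad = lambda_max_plus_and_grad(B)
# finite-difference check along a random symmetric direction
E = rng.normal(size=B.shape); E = (E + E.T) / 2; t = 1e-6
fd = (lambda_max_plus_and_grad(B + t * E)[0] - val) / t
print(val, fd, (grad * E).sum())   # fd and <grad, E> should agree closely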
Dual certificate of robustness. The dual formulation is also useful because any value of the dual is an upper bound on the optimal value of the primal. Specifically, if (W[t], V[t], c[t]) are the parameters at iteration t of training, then

f^{ij}(A(x)) ≤ f^{ij}(x) + (ε/4) · [ D · λ⁺max(M^{ij}(V[t], W[t]) − diag(c[t]_ij)) + 1ᵀ max(c[t]_ij, 0) ],   (16)
for any attack A. As we train the network, we obtain a quick upper bound on the worst-case adversarial loss directly from the regularization loss, without having to optimize an SDP each time.
5 OTHER UPPER BOUNDS
In Section 3, we described a function f^{ij}_SDP that yields an efficient upper bound on the adversarial loss, which we obtained using convex relaxations. One could consider other simple ways to upper bound the loss; we describe here two common ones, based on the spectral and Frobenius norms.
Spectral bound: Note that vᵀ(σ(Wx̃) − σ(Wx)) ≤ ‖v‖₂‖σ(Wx̃) − σ(Wx)‖₂ by Cauchy-Schwarz. Moreover, since σ is contractive, ‖σ(Wx̃) − σ(Wx)‖₂ ≤ ‖W(x̃ − x)‖₂ ≤ ‖W‖₂‖x̃ − x‖₂ ≤ ε√d‖W‖₂, where ‖W‖₂ is the spectral norm (maximum singular value) of W. This yields the following upper bound that we denote by f_spectral:

f^{ij}(A(x)) ≤ f^{ij}_spectral(x) := f^{ij}(x) + ε√d ‖W‖₂ ‖V_i − V_j‖₂.   (17)
This measure of vulnerability to adversarial examples based on the spectral norms of the weights of each layer is considered in Szegedy et al. (2014) and Cisse et al. (2017).
Frobenius bound: For ease of training, often the Frobenius norm is regularized (weight decay) instead of the spectral norm. Since ‖W‖_F ≥ ‖W‖₂, we get a corresponding upper bound f_frobenius:

f^{ij}(A(x)) ≤ f^{ij}_frobenius(x) := f^{ij}(x) + ε√d ‖W‖_F ‖V_i − V_j‖₂.   (18)
In Section 6, we empirically compare our proposed bound using f^{ij}_SDP to these two upper bounds.
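Both (17) and (18) are closed-form in the weights, so they can be evaluated in a few lines; a minimal sketch (the random weights are illustrative):

import numpy as np

def norm_bounds(f_ij, W, Vi, Vj, eps):
    """Upper bounds (17) and (18) on the pairwise margin under any attack:
    f_spectral  = f^{ij}(x) + eps*sqrt(d)*||W||_2 * ||V_i - V_j||_2,
    f_frobenius uses ||W||_F >= ||W||_2 in place of the spectral norm."""
    d = W.shape[1]
    dv = np.linalg.norm(Vi - Vj)
    spec = f_ij + eps * np.sqrt(d) * np.linalg.norm(W, 2) * dv
    frob = f_ij + eps * np.sqrt(d) * np.linalg.norm(W, 'fro') * dv
    return spec, frob

rng = np.random.default_rng(7)
W = rng.normal(size=(500, 784)) / np.sqrt(784)
Vi, Vj = rng.normal(size=500), rng.normal(size=500)
print(norm_bounds(-1.0, W, Vi, Vj, eps=0.1))   # frobenius >= spectral always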
6 EXPERIMENTS
We evaluated our method on the MNIST dataset of handwritten digits, where the task is to classify images into one of ten classes. Our results can be summarized as follows: First, in Section 6.1, we show that our certificates of robustness are tighter than those based on simpler methods such as Frobenius and spectral bounds (Section 5), but our bounds are still too high to be meaningful for general networks. Then in Section 6.2, we show that by training on the certificates, we obtain networks with much better bounds and hence meaningful robustness. This reflects an important point: while accurately analyzing the robustness of an arbitrary network is hard, training the certificate jointly leads to a network that is robust and certifiably so. In Section 6.3, we present implementation details, design choices, and empirical observations that we made while implementing our method.
Networks. In this work, we focus on two layer networks. In all our experiments, we used neural networks with m = 500 hidden units, and TensorFlow's implementation of Adam (Kingma & Ba, 2014) as the optimizer; we considered networks with more hidden units, but these did not substantially improve accuracy. We experimented with both the multiclass hinge loss and cross-entropy. All hyperparameters (including the choice of loss function) were tuned based on the error of the Projected Gradient Descent (PGD) attack (Madry et al., 2017) at ε = 0.1; we report the hyperparameter settings below. We considered the following training objectives, providing 5 different networks:
1. Normal training (NT-NN). Cross-entropy loss and no explicit regularization.
2. Frobenius norm regularization (Fro-NN). Hinge loss and a regularizer λ(‖W‖_F + ‖v‖₂) with λ = 0.08.
3. Spectral norm regularization (Spe-NN). Hinge loss and a regularizer λ(‖W‖2 + ‖v‖2) with λ = 0.09.
4. Adversarial training (AT-NN). Cross-entropy with the adversarial loss against PGD as a regularizer, with the regularization parameter set to 0.5. We found that this regularized loss works better than optimizing only the adversarial loss, which is the defense proposed in Madry et al. (2017). We set the step size of the PGD adversary to 0.1, number of iterations to 40, and perturbation size to 0.3.
5. Proposed training objective (SDP-NN). Dual SDP objective described in Equation 15 of Section 4. Implementation details and hyperparameter values are detailed in Section 6.3.
Evaluating upper bounds. Below we will consider various upper bounds on the adversarial loss ℓ_{A_opt} (based on our method, as well as the Frobenius and spectral bounds described in Section 5). Ideally we would compare these to the ground-truth adversarial loss ℓ_{A_opt}, but computing this exactly is difficult. Therefore, we compare upper bounds on the adversarial loss with a lower bound on ℓ_{A_opt} instead. The loss of any attack provides a valid lower bound, and we consider the strong Projected Gradient Descent (PGD) attack run against the cross-entropy loss, starting from a random point in B_ε(x), with 5 random restarts. We observed that PGD against hinge loss did not work well, so we used cross-entropy even for attacking networks trained with the hinge loss.
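For reference, a minimal sketch of PGD with random restarts matching this protocol; the step size, iteration count, and the toy objective in the usage example are illustrative assumptions rather than our exact attack configuration:

import numpy as np

def pgd_attack(x, grad_fn, eps, step=0.01, iters=40, restarts=5, rng=None):
    """Projected gradient ascent on a loss within B_eps(x), with random restarts.
    grad_fn(x_adv) returns (loss, gradient) of the attacked loss at x_adv."""
    rng = rng or np.random.default_rng()
    best_x, best_loss = x, -np.inf
    for _ in range(restarts):
        x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        for _ in range(iters):
            loss, g = grad_fn(x_adv)
            x_adv = x_adv + step * np.sign(g)            # ascent step
            x_adv = np.clip(x_adv, x - eps, x + eps)     # project onto B_eps(x)
            x_adv = np.clip(x_adv, 0.0, 1.0)             # valid pixel range
        loss, _ = grad_fn(x_adv)
        if loss > best_loss:
            best_x, best_loss = x_adv, loss
    return best_x

# toy usage: attack a quadratic "loss" around a point (illustration only)
grad_fn = lambda z: (float((z ** 2).sum()), 2 * z)
x_adv = pgd_attack(np.full(5, 0.5), grad_fn, eps=0.1, rng=np.random.default_rng(8))
print(np.abs(x_adv - 0.5).max())  # stays within eps of the start point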
6.1 QUALITY OF THE UPPER BOUND
For each of the five networks described above, we computed upper bounds on the 0-1 loss based on our certificate (which we refer to as the “SDP bound” in this section), as well as the Frobenius and spectral bounds described in Section 5. While Section 4 provides a procedure for efficiently obtaining an SDP bound as a result of training, for networks not trained with our method we need to solve an SDP at the end of training to obtain certificates. Fortunately, this only needs to be done once for every pair of classes. In our experiments, we use the modeling toolbox YALMIP (Löfberg, 2004) with Sedumi (Sturm, 1999) as a backend to solve the SDPs, using the dual form (14); this took roughly 10 minutes per SDP (around 8 hours in total for a given model).
In Figure 2, we display average values of the different upper bounds over the 10,000 test examples, as well as the corresponding lower bound from PGD. We find that our bound is tighter than the Frobenius and spectral bounds for all the networks considered, but its tightness relative to the PGD lower bound varies across the networks. For instance, our bound is relatively tight on Fro-NN, but unfortunately Fro-NN is not very robust against adversarial examples (the PGD attack exhibits large
error). In contrast, the adversarially trained network AT-NN does appear to be robust to attacks, but our certificate, despite being much tighter than the Frobenius and spectral bounds, is far away from the PGD lower bound. The only network that is both robust and has relatively tight upper bounds is SDP-NN, which was explicitly trained to be both robust and certifiable as described in Section 4; we examine this network and the effects of training in more detail in the next subsection.
6.2 EVALUATING PROPOSED TRAINING OBJECTIVE.
In the previous section, we saw that the SDP bound, while being tighter than simpler upper bounds, could still be quite loose on arbitrary networks. However, optimizing against the SDP certificate seemed to make the certificate tighter. In this section, we explore the effect of different optimization objectives in more detail. First, we plot on a single axis the best upper bound (i.e., the SDP bound) and the lower bound (from PGD) on the adversarial loss obtained with each of the five training objectives discussed above. This is given in Figure 3a.
Neither spectral nor Frobenius norm regularization seems to be helpful for encouraging adversarial robustness—the actual performance of those networks against the PGD attack is worse than the upper bound for SDP-NN against all attacks. This shows that the SDP certificate actually provides a useful training objective for encouraging robustness compared to other regularizers.
Separately, we can ask whether SDP-NN is robust to actual attacks. We explore the robustness of our network in Figure 3b, where we plot the performance of SDP-NN against 3 attacks—the PGD attack from before, the Carlini-Wagner attack (Carlini & Wagner, 2017b) (another strong attack), and the weaker Fast Gradient Sign Method (FGSM) baseline. We see substantial robustness against all 3 attacks, even though our method was not explicitly trained with any of them in mind.
Next, we compare to other bounds reported in the literature. A rough ceiling is given by the network of Madry et al. (2017), which is a relatively large four-layer convolutional network adversarially trained against PGD. While this network has no accompanying certificate of robustness, it was evaluated against a number of attack strategies and had worst-case error 11% at ε = 0.3. Another set of numbers comes from Carlini et al. (2017), who use formal verification methods to compute A_opt exactly on 10 input examples for a small (72-node) variant of the Madry et al. network. The authors reported to us that this network misclassifies 6 out of 10 examples at ε = 0.05 (we note that 4 out of 10 of these were misclassified to start with, but 3 of the 4 can also be flipped to a different wrong class with some ε < 0.07).
At the value ε = 0.1 for which it was tuned, SDP-NN has error 16% against the PGD attack, and an upper bound of 35% error against any attack. This is substantially better than the small 72-node network, but also much worse than the full Madry et al. network. How much of the latter looseness comes from conservatism in our method, versus the fact that our network has only two layers? We can get some idea by considering the AT-NN network, which was trained similarly to Madry et al., but uses the same architecture as SDP-NN. From Figure 3a, we see that the error of SDP-NN against PGD (16%) is not much worse than that of AT-NN (11%), even though AT-NN was explicitly trained against the PGD attack. This suggests that most of the gap comes from the smaller network depth,
rather than from conservatism in the SDP bound. We are currently in the process of extending our approach to deeper networks, and optimistic about obtaining improved bounds with such networks.
Finally, we compare with the approach proposed in Kolter & Wong (2017) whose work appeared shortly after an initial version of our paper. They provide an upper bound on the adversarial loss using linear programs (LP) followed by a method to efficiently train networks to minimize this upper bound. In order to compare with SDP-NN, the authors provided us with a network with the same architecture as SDP-NN, but trained using their LP based objective. We call this network LP-NN. Table 1 shows that LP-NN and SDP-NN are comparable in terms of their robustness against PGD, and the robustness guarantees that they come with.
Interestingly, the SDP and LP approaches provide vacuous bounds for networks not trained to minimize the respective upper bounds (though these networks are indeed robust). This suggests that these two approaches are comparable, but complementary. Finally, we note that in contrast to this work, the approach of Kolter & Wong (2017) extends to deeper networks, which allows them to train a four-layer CNN with a provable upper bound on adversarial error of 8.4% error.
6.3 IMPLEMENTATION DETAILS
We implemented our training objective in TensorFlow, and implemented λ⁺max as a custom operator using SciPy's implementation of the Lanczos algorithm for fast top-eigenvector computation; occasionally Lanczos fails to converge due to a small eigen-gap, in which case we back off to a full SVD. We used hinge loss as the classification loss, and decayed the learning rate in steps from 10⁻³ to 10⁻⁵, decreasing by a factor of 10 every 30 epochs. Each gradient step involves computing top eigenvectors for 45 different matrices, one for each pair of classes (i, j). In order to speed up computation, for each update we randomly pick i_t and only compute gradients for pairs (i_t, j), j ≠ i_t, requiring only 9 top-eigenvector computations in each step.
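A small sketch of this pair-subsampling step (the helper name is ours; k = 10 corresponds to MNIST):

import numpy as np

def sampled_pairs(k, rng):
    """Per-update subsampling used to cut 45 top-eigenvector computations to 9:
    pick one class i_t at random and regularize only the pairs (i_t, j), j != i_t."""
    i_t = rng.integers(k)
    return [(i_t, j) for j in range(k) if j != i_t]

rng = np.random.default_rng(5)
print(sampled_pairs(10, rng))   # 9 pairs instead of all 45 unordered pairs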
For the regularization parameters λ_ij, the simplest idea is to set them all equal to the same value; this leads to the unweighted regularization scheme, where λ_ij = λ for all pairs (i, j). We tuned λ to 0.05, which led to reasonably good bounds. However, we observed that certain pairs of classes tended to have larger margins f^{ij}(x) than other classes, which meant that certain label pairs appeared in the maximum of (12) much more often. That led us to consider a weighted regularization scheme with λ_ij = w_ij · λ, where w_ij is the fraction of training points for which the label i (or j) appears as the maximizing term in (12). We updated the values of these weights every 20 epochs. Figure 4a compares the PGD lower bound and SDP upper bound for the unweighted and weighted networks. The weighted network is better than the unweighted network for both the lower and upper bounds.
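A sketch of how the weights w_ij can be recomputed from the maximizing terms in (12); the helper and the random placeholder margins are illustrative:

import numpy as np

def pair_weights(margins, labels, k):
    """Weighted scheme lambda_ij = w_ij * lambda: w_ij is the fraction of
    training points whose maximizing incorrect class in (12) forms the pair.

    margins: (n, k) array of certified margins f^{iy}_SDP per example;
    labels:  (n,) true labels."""
    counts = np.zeros((k, k))
    for f_row, y in zip(margins, labels):
        row = f_row.copy(); row[y] = -np.inf
        i = int(np.argmax(row))              # maximizing incorrect class
        a, b = min(i, y), max(i, y)
        counts[a, b] += 1
    return counts / len(labels)

rng = np.random.default_rng(6)
n, k = 1000, 10
margins = rng.normal(size=(n, k)); labels = rng.integers(k, size=n)
wts = pair_weights(margins, labels, k)
print(wts.sum())   # weights sum to 1 across unordered pairs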
Finally, we saw in Equation 16 of Section 4 that the dual variables cij provide a quick-to-compute certificate of robustness. Figure 4b shows that the certificates provided by these dual variables are very close to what we would obtain by fully optimizing the semidefinite programs. These dual certificates made it easy to track robustness across epochs of training and to tune hyperparameters.
7 DISCUSSION
In this work, we proposed a method for producing certificates of robustness for neural networks, and for training against these certificates to obtain networks that are provably robust against adversaries.
Related work. In parallel and independent work, Kolter & Wong (2017) also provide provably robust networks against `∞ perturbations by using convex relaxations. While our approach uses a single semidefinite program to compute an upper bound on the adversarial loss, Kolter & Wong
(2017) use separate linear programs for every data point, and apply their method to networks of depth up to four. In theory, neither bound is strictly tighter than the other, and our experiments (Table 1) suggest that the two bounds are complementary. Combining the approaches seems to be a promising future direction.
Katz et al. (2017a) and the follow-up Carlini et al. (2017) also provide certificates of robustness for neural networks against `∞ perturbations. That work uses SMT solvers, which are a tool from the formal verification literature. The SMT solver can answer the binary question “Is there an adversarial example within distance of the input x?”, and is correct whenever it terminates. The main drawback of SMT and similar formal verification methods is that they are slow—they have worst-case exponential-time scaling in the size of the network; moreover, to use them during training would require a separate search for each gradient step. Huang et al. (2017) use SMT solvers and are able to analyze state-of-the-art networks on MNIST, but they make various approximations such that their numbers are not true upper bounds.
Bastani et al. (2016) provide tractable certificates but require ε to be small enough to ensure that the entire ℓ∞ ball around an input lies within the same linear region. For the networks and values of ε that we consider in our paper, we found that this condition did not hold. Recently, Hein & Andriushchenko (2017) proposed a bound for guaranteeing robustness to ℓp-norm perturbations, based on the maximum p/(p−1)-norm of the gradient in the ε-ball around the inputs. Hein & Andriushchenko (2017) show how to efficiently compute this bound for p = 2, as opposed to our work, which focuses on ℓ∞ and requires different techniques to achieve scalability.
Madry et al. (2017) perform adversarial training against PGD on the MNIST and CIFAR-10 datasets, obtaining networks that they suggest are “secure against first-order adversaries”. However, this is based on an empirical observation that PGD is nearly-optimal among gradient-based attacks, and does not correspond to any formal robustness guarantee.
Finally, the notion of a certificate appears in the theory of convex optimization, but means something different in that context; specifically, it corresponds to a proof that a point is near the optimum of a convex function, whereas here our certificates provide upper bounds on non-convex functions. Additionally, while robust optimization (Bertsimas et al., 2011) provides a tool for optimizing objectives with robustness constraints, applying it directly would involve the same intractable optimization for Aopt that we deal with here.
Other approaches to verification. While they have not been explored in the context of neural networks, there are approaches in the control theory literature for verifying robustness of dynamical systems, based on Lyapunov functions (Lyapunov, 1892; 1992). We can think of the activations in a neural network as the evolution of a time-varying dynamical system, and attempt to prove stability around a trajectory of this system (Tedrake et al., 2010; Tobenkin et al., 2011). Such methods typically use sum-of-squares verification (Papachristodoulou & Prajna, 2002; 2005; Parrilo, 2003) and are restricted to relatively low-dimensional dynamical systems, but could plausibly scale to
larger settings. Another approach is to construct families of networks that are provably robust a priori, which would remove the need to verify robustness of the learned model; to our knowledge this has not been done for any expressive model families.
Adversarial examples and secure ML. There has been a great deal of recent work on the security of ML systems; we provide only a sampling here, and refer the reader to Barreno et al. (2010), Biggio et al. (2014a), Papernot et al. (2016b), and Gardiner & Nagaraja (2016) for some recent surveys.
Adversarial examples for neural networks were first discovered by Szegedy et al. (2014), and since then a number of attacks and defenses have been proposed. We have already discussed gradientbased methods as well as defenses based on adversarial training. There are also other attacks based on, e.g., saliency maps (Papernot et al., 2016a), KL divergence (Miyato et al., 2015), and elastic net optimization (Chen et al., 2017); many of these attacks are collated in the cleverhans repository (Goodfellow et al., 2016). For defense, rather than making networks robust to adversaries, some work has focused on simply detecting adversarial examples. However, Carlini & Wagner (2017a) recently showed that essentially all known detection methods can be subverted by strong attacks.
As explained in Barreno et al. (2010), there are a number of different attack models beyond the test-time attacks considered here, based on different attacker goals and capabilities. For instance, one can consider data poisoning attacks, where an attacker modifies the training set in an effort to affect test-time performance. Newsome et al. (2006), Laskov & Šrndić (2014), and Biggio et al. (2014b) have demonstrated poisoning attacks against real-world systems.
Other types of certificates. Certificates of performance for machine learning systems are desirable in a number of settings. This includes verifying safety properties of air traffic control systems (Katz et al., 2017a;b) and self-driving cars (O’Kelly et al., 2016; 2017), as well as security applications such as robustness to training time attacks (Steinhardt et al., 2017). More broadly, certificates of performance are likely necessary for deploying machine learning systems in critical infrastructure such as internet packet routing (Winstein & Balakrishnan, 2013; Sivaraman et al., 2014). In robotics, certificates of stability are routinely used both for safety verification (Lygeros et al., 1999; Mitchell et al., 2005) and controller synthesis (Başar & Bernhard, 2008; Tedrake et al., 2010).
In traditional verification work, Rice’s theorem (Rice, 1953) is a strong impossibility result essentially stating that most properties of most programs are undecidable. Similarly, we should expect that verifying robustness for arbitrary neural networks is hard. However, the results in this work suggest that it is possible to learn neural networks that are amenable to verification, in the same way that it is possible to write programs that can be formally verified. Optimistically, given expressive enough certification methods and model families, as well as strong enough specifications of robustness, one could even hope to train vector representations of natural images with strong robustness properties, thus finally closing the chapter on adversarial vulnerabilities in the visual domain.
Reproducibility. All code, data and experiments for this paper are available on the Codalab platform at https://worksheets.codalab.org/worksheets/0xa21e794020bb474d8804ec7bc0543f52/.
Acknowledgements. This work was partially supported by a Future of Life Institute Research Award and Open Philanthropy Project Award. JS was supported by a Fannie & John Hertz Foundation Fellowship and an NSF Graduate Research Fellowship. We are also grateful to Guy Katz, Zico Kolter and Eric Wong for providing relevant experimental results for comparison, as well as to the anonymous reviewers for useful feedback and references.
A DUALITY
In this section we justify the duality relation (14). Recall that the primal program is
maximize ⟨M, P⟩   subject to P ⪰ 0, diag(P) ≤ 1.   (19)
Rather than taking the dual directly, we first add the redundant constraint tr(P) ≤ d + m + 1 (it is redundant because the SDP is in d + m + 1 dimensions and diag(P) ≤ 1). This yields

maximize ⟨M, P⟩   subject to P ⪰ 0, diag(P) ≤ 1, tr(P) ≤ d + m + 1.   (20)
We now form the Lagrangian for the constraints diag(P ) ≤ 1, leaving the other two constraints as-is. This yields the equivalent optimization problem
maximize_P min_{c≥0} ⟨M, P⟩ + cᵀ(1 − diag(P))   subject to P ⪰ 0, tr(P) ≤ d + m + 1.   (21)
Now, we apply minimax duality to swap the order of min and max; the value of (21) is thus equal to
minimize_c max_{P⪰0, tr(P)≤d+m+1} ⟨M, P⟩ + cᵀ(1 − diag(P))   subject to c ≥ 0.   (22)
The inner maximum can be simplified as

1ᵀc + (d + m + 1) · max_{P⪰0, tr(P)≤1} ⟨M − diag(c), P⟩ = 1ᵀc + (d + m + 1) · λ⁺max(M − diag(c)).   (23)
Therefore, (22) simplifies to
minimize_c 1ᵀc + (d + m + 1) · λ⁺max(M − diag(c))   subject to c ≥ 0.   (24)
This is almost the form given in (14), except that c is constrained to be non-negative and we have 1ᵀc instead of 1ᵀ max(c, 0). However, note that for the λ⁺max term, it is always better for c to be larger; therefore, replacing c with max(c, 0) means that the optimal value of c will always be nonnegative, thus allowing us to drop the c ≥ 0 constraint and optimize c in an unconstrained manner. This finally yields the claimed duality relation (14). | 1. What is the focus of the paper regarding neural network security?
2. What are the strengths of the proposed approach, particularly in terms of its mathematical formulation and empirical evaluation?
3. Are there any concerns or limitations regarding the method's effectiveness against various attacks?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Is there any suggestion for improvement, such as comparing the approach with other recent works in the field? | Review | Review
The authors propose a new defense against security attacks on neural networks. The attack model involves a standard l_inf norm constraint. Remarkably, the approach outputs a security certificate (security guarantee) for the algorithm, which makes it appealing for practical security use. Furthermore, the authors incorporate an approximation of the certificate into their objective function, thus training networks that are more robust against attacks. The approach is evaluated against several attacks on MNIST data.
First of all, the paper is very well written and structured. As standard in the security community, the attack model is precisely formalized (I find this missing in several other ML papers on the topic). The certificate is derived with rigorous and sound math. An innovative approximation based on insight into a relation to the MAXCUT problem is shown. An innovative training criterion based on that certificate is proposed. Both the performance of the new training objective and the tightness of the certificate are analyzed empirically, showing good agreement with the theory and good robustness against several attacks.
In summary, this is an innovative paper that treats the subject with rigorous mathematical formalism and is successful in the empirical evaluation. For me, it is a clear accept. The only drawback I see is the missing theoretical and empirical comparison to the recent NIPS 2017 paper by Hein et al. |
ICLR | Title
Certified Defenses against Adversarial Examples
Abstract
While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most = 0.1 can cause more than 35% test error.
1 INTRODUCTION
Despite the impressive (and sometimes even superhuman) accuracies of machine learning on diverse tasks such as object recognition (He et al., 2015), speech recognition (Xiong et al., 2016), and playing Go (Silver et al., 2016), classifiers still fail catastrophically in the presence of small imperceptible but adversarial perturbations (Szegedy et al., 2014; Goodfellow et al., 2015; Kurakin et al., 2016). In addition to being an intriguing phenonemon, the existence of such “adversarial examples” exposes a serious vulnerability in current ML systems (Evtimov et al., 2017; Sharif et al., 2016; Carlini et al., 2016). While formally defining an “imperceptible” perturbation is difficult, a commonly-used proxy is perturbations that are bounded in `∞-norm (Goodfellow et al., 2015; Madry et al., 2017; Tramèr et al., 2017); we focus on this attack model in this paper, as even for this proxy it is not known how to construct high-performing image classifiers that are robust to perturbations.
While a proposed defense (classifier) is often empirically shown to be successful against the set of attacks known at the time, new stronger attacks are subsequently discovered that render the defense useless. For example, defensive distillation (Papernot et al., 2016c) and adversarial training against the Fast Gradient Sign Method (Goodfellow et al., 2015) were two defenses that were later shown to be ineffective against stronger attacks (Carlini & Wagner, 2016; Tramèr et al., 2017). In order to break this arms race between attackers and defenders, we need to come up with defenses that are successful against all attacks within a certain class.
However, even computing the worst-case error for a given network against all adversarial perturbations in an `∞-ball is computationally intractable. One common approximation is to replace the worst-case loss with the loss from a given heuristic attack strategy, such as the Fast Gradient Sign Method (Goodfellow et al., 2015) or more powerful iterative methods (Carlini & Wagner, 2017a; Madry et al., 2017). Adversarial training minimizes the loss with respect to these heuristics. However, this essentially minimizes a lower bound on the worst-case loss, which is problematic since points where the bound is loose have disproportionately lower objective values, which could lure and mislead an optimizer. Indeed, while adversarial training often provides robustness against a specific attack, it often fails to generalize to new attacks, as described above. Another approach is to compute the worst-case perturbation exactly using discrete optimization (Katz et al., 2017a; Carlini
et al., 2017). Currently, these approaches can take up to several hours or longer to compute the loss for a single example even for small networks with a few hundred hidden units. Training a network would require performing this computation in the inner loop, which is infeasible.
In this paper, we introduce an approach that avoids both the inaccuracy of lower bounds and the intractability of exact computation, by computing an upper bound on the worst-case loss for neural networks with one hidden layer, based on a semidefinite relaxation that can be computed efficiently. This upper bound serves as a certificate of robustness against all attacks for a given network and input. Minimizing an upper bound is safer than minimizing a lower bound, because points where the bound is loose have disproportionately higher objective values, which the optimizer will tend to avoid. Furthermore, our certificate of robustness, by virtue of being differentiable, is trainable—it can be optimized at training time jointly with the network, acting as a regularizer that encourages robustness against all `∞ attacks.
In summary, we are the first (along with the concurrent work of Kolter & Wong (2017)) to demonstrate a certifiable, trainable, and scalable method for defending against adversarial examples on two-layer networks. We train a network on MNIST whose test error on clean data is 4.2%, and which comes with a certificate that no attack can misclassify more than 35% of the test examples using `∞ perturbations of size = 0.1.
Notation. For a vector z ∈ Rn, we use zi to denote the ith coordinate of z. For a matrix Z ∈ Rm×n, Zi denotes the ith row. For any activation function σ : R → R (e.g., sigmoid, ReLU) and a vector z ∈ Rn, σ(z) is a vector in Rn with σ(z)i = σ(zi) (non-linearity is applied element-wise). We use B (z) to denote the `∞ ball of radius around z ∈ Rd: B (z) = {z̃ | |z̃i − zi| ≤ for i = 1, 2, . . . d}. Finally, we denote the vector of all zeros by 0 and the vector of all ones by 1.
2 SETUP
Score-based classifiers. Our goal is to learn a mapping C : X → Y , where X = Rd is the input space (e.g., images) and Y = {1, . . . , k} is the set of k class labels (e.g., object categories). Assume C is driven by a scoring function f i : X → R for all classes i ∈ Y , where the classifier chooses the class with the highest score: C(x) = argmaxi∈Y f i(x). Also, define the pairwise margin f ij(x) def = f i(x) − f j(x) for every pair of classes (i, j). Note that the classifier outputs C(x) = i iff f ij(x) > 0 for all alternative classes j 6= i. Normally, a classifier is evaluated on the 0-1 loss `(x, y) = I[C(x) 6= y]. This paper focuses on linear classifiers and neural networks with one hidden layer. For linear classifiers, f i(x) def= W>i x, where Wi is the i th row of the parameter matrix W ∈ Rk×d.
For neural networks with one hidden layer consisting of m hidden units, the scoring function is f i(x) = V >i σ(Wx), where W ∈ Rm×d and V ∈ Rk×m are the parameters of the first and second layer, respectively, and σ is a non-linear activation function applied elementwise (e.g., for ReLUs, σ(z) = max(z, 0)). We will assume below that the gradients of σ are bounded: σ′(z) ∈ [0, 1] for all z ∈ R; this is true for ReLUs, as well as for sigmoids (with the stronger bound σ′(z) ∈ [0, 14 ]). Attack model. We are interested in classification in the presence of an attacker A : X → X that takes a (test) input x and returns a perturbation x̃. We consider attackers A that can perturb each feature xi by at most ≥ 0; formally, A(x) is required to lie in the `∞ ball B (x) def = {x̃ | ‖x̃− x‖∞ ≤ }, which is the standard constraint first proposed in Szegedy et al. (2014). Define the adversarial loss with respect to A as `A(x, y) = I[C(A(x)) 6= y]. We assume the white-box setting, where the attacker A has full knowledge of C. The optimal (untargeted) attack chooses the input that maximizes the pairwise margin of an incorrect class i over the correct class y: Aopt(x) = argmaxx̃∈B (x) maxi f
iy(x̃). For a neural network, computing Aopt is a non-convex optimization problem; heuristics are typically employed, such as the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), which perturbs x based on the gradient, or the Carlini-Wagner attack, which performs iterative optimization (Carlini & Wagner, 2017b).
3 CERTIFICATE ON THE ADVERSARIAL LOSS
For ease of exposition, we first consider binary classification with classes Y = {1, 2}; the multiclass extension is discussed at the end of Section 3.3. Without loss of generality, assume the correct label for x is y = 2. Simplifying notation, let f(x) = f1(x) − f2(x) be the margin of the incorrect class over the correct class. Then Aopt(x) = argmaxx̃∈B (x) f(x̃) is the optimal attack, which is successful if f(Aopt(x)) > 0. Since f(Aopt(x)) is intractable to compute, we will try to upper bound it via a tractable relaxation.
In the rest of this section, we first review a classic result in the simple case of linear networks where a tight upper bound is based on the `1-norm of the weights (Section 3.1). We then extend this to general classifiers, in which f(Aopt(x)) can be upper bounded using the maximum `1-norm of the gradient at any point x̃ ∈ B (x) (Section 3.2). For two-layer networks, this quantity is upper bounded by the optimal value fQP(x) of a non-convex quadratic program (QP) (Section 3.3), which in turn is upper bounded by the optimal value fSDP(x) of a semidefinite program (SDP). The SDP is convex and can be computed exactly (which is important for obtainining actual certificates). To summarize, we have the following chain of inequalities:
f(A(x)) ≤ f(Aopt(x)) (3.2)
≤ f(x) + max x̃∈B (x)
‖∇f(x̃)‖1 (3.3) ≤ fQP(x) (3.3) ≤ fSDP(x), (1)
which implies that the adversarial loss `A(x) = I[f(A(x)) > 0] with respect to any attacker A is upper bounded by I[fSDP(x) > 0]. Note that for certain non-linearities such as ReLUs, ∇f(x̃) does not exist everywhere, but our analysis below holds as long as f is differentiable almost-everywhere.
3.1 LINEAR CLASSIFIERS
For (binary) linear classifiers, we have f(x) = (W1 −W2)>x, where W1,W2 ∈ Rd are the weight vectors for the two classes. For any input x̃ ∈ B (x), Hölder’s inequality with ‖x− x̃‖∞ ≤ gives:
f(x̃) = f(x) + (W1 −W2)>(x̃− x) ≤ f(x) + ‖W1 −W2‖1. (2)
Note that this bound is tight, obtained by taking Aopt(x)i = xi + sign(W1i −W2i).
3.2 GENERAL CLASSIFIERS
For more general classifiers, we cannot compute f(Aopt(x)) exactly, but motivated by the above, we can use the gradient to obtain a linear approximation g:
f(x̃) ≈ g(x̃) def= f(x) +∇f(x)> ( x̃− x ) ≤ f(x) + ‖∇f(x)‖1. (3)
Using this linear approximation to generate A(x) corresponds exactly to the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015). However, f is only close to g when x̃ is very close to x, and people have observed the gradient masking phenomenon (Tramèr et al., 2017; Papernot et al., 2016b) in several proposed defenses that train against approximations like g, such as saturating networks (Nayebi & Ganguli, 2017), distillation (Papernot et al., 2016c), and adversarial training (Goodfellow et al., 2015). Specifically, defenses that try to minimize ‖∇f(x)‖1 locally at the training points result in loss surfaces that exhibit sharp curvature near those points, essentially rendering the linear approximation g(x̃) meaningless. Some attacks (Carlini & Wagner, 2016; Tramèr et al., 2017) evade these defenses and witness a large f(Aopt(x)). Figure 1a provides a simple illustration.
We propose an alternative approach: use integration to obtain an exact expression for f(x̃) in terms of the gradients along the line between x and x̃:
f(x̃) = f(x) + ∫ 1 0 ∇f ( tx̃+ (1− t)x )>( x̃− x ) dt
≤ f(x) + max x̃∈B (x) ‖∇f(x̃)‖1, (4)
where the inequality follows from the fact that tx̃ + (1 − t)x ∈ B (x) for all t ∈ [0, 1]. The key difference between (4) and (3) is that we consider the gradients over the entire ballB (x) rather than only at x (Figure 1b). However, computing the RHS of (4) is intractable in general. For two-layer neural networks, this optimization has additional structure which we will exploit in the next section.
3.3 TWO-LAYER NEURAL NETWORKS
We now unpack the upper bound (4) for two-layer neural networks. Recall from Section 2 that f(x) = f1(x) − f2(x) = v>σ(Wx), where v def= V1 − V2 ∈ Rm is the difference in second-layer weights for the two classes. Let us try to bound the norm of the gradient ‖∇f(x̃)‖1 for x̃ ∈ B (x). If we apply the chain rule, we see that the only dependence on x̃ is σ′(Wx̃), the activation derivatives. We now leverage our assumption that σ′(z) ∈ [0, 1]m for all vectors z ∈ Rm, so that we can optimize over possible activation derivatives s ∈ [0, 1]m directly independent of x (note that there is potential looseness because not all such s need be obtainable via some x̃ ∈ B (x)). Therefore:
‖∇f(x̃)‖1 (i) = ‖W> diag(v)σ′(Wx̃)‖1 (ii)
≤ max s∈[0,1]m ‖W> diag(v)s‖1
(iii) = max s∈[0,1]m,t∈[−1,1]d t>W> diag(v)s, (5)
where (i) follows from the chain rule, (ii) uses the fact that σ has bounded derivatives σ′(z) ∈ [0, 1], and (iii) follows from the identity ‖z‖1 = maxt∈[−1,1]d t>z. (Note that for sigmoid networks, where σ′(z) ∈ [0, 14 ], we can strengthen the above bound by a corresponding factor of 1 4 .) Substituting the bound (5) into (4), we obtain an upper bound on the adversarial loss that we call fQP:
f(Aopt(x)) ≤ f(x) + max x̃∈B (x) ‖∇f(x̃)‖1
≤ f(x) + max s∈[0,1]m,t∈[−1,1]d
t>W> diag(v)s def = fQP(x). (6)
Unfortunately, (6) still involves a non-convex optimization problem (sinceW> diag(v) is not necessarily negative semidefinite). In fact, it is similar to the NP-hard MAXCUT problem, which requires maximizing x>Lx over x ∈ [−1, 1]d for a graph with Laplacian matrix L. While MAXCUT is NP-hard, it can be efficiently approximated, as shown by the celebrated semidefinite programming relaxation for MAXCUT in Goemans & Williamson (1995). We follow a similar approach here to obtain an upper bound on fQP(x).
First, to make our variables lie in [−1, 1]m instead of [0, 1]m, we reparametrize s to produce:
max s∈[−1,1]m,t∈[−1,1]d
1 2 t>W> diag(v)(1+ s). (7)
Next pack the variables into a vector y ∈ Rm+d+1 and the parameters into a matrix M :
y def = [ 1 t s ] M(v,W ) def = 0 0 1>W> diag(v)0 0 W> diag(v) diag(v)>W1 diag(v)>W 0 . (8) In terms of these new objects, our objective takes the form:
max y∈[−1,1](m+d+1)
1 4 y>M(v,W )y = max y∈[−1,1](m+d+1) 1 4 〈M(v,W ), yy>〉. (9)
Note that every valid vector y ∈ [−1,+1]m+d+1 satisfies the constraints yy> 0 and (yy>)jj = 1. Defining P = yy>, we obtain the following convex semidefinite relaxation of our problem:
fQP(x) ≤ fSDP(x) def = f(x) +
4 max P 0,diag(P )≤1 〈M(v,W ), P 〉 . (10)
Note that the optimization of the semidefinite program depends only on the weights v and W and does not depend on the inputs x, so it only needs to be computed once for a model (v,W ).
Semidefinite programs can be solved with off-the-shelf optimizers, although these optimizers are somewhat slow on large instances. In Section 4 we propose a fast stochastic method for training, which only requires computing the top eigenvalue of a matrix.
Generalization to multiple classes. The preceding arguments all generalize to the pairwise margins f ij , to give:
f ij(A(x)) ≤ f ijSDP(x) def = f ij(x) +
4 max P 0,diag(P )≤1
〈 M ij(V,W ), P 〉 , where (11)
M ij(V,W ) is defined as in (9) with v = Vi − Vj . The adversarial loss of any attacker, `A(x, y) = I[maxi 6=y f iy(A(x)) > 0], can be bounded using the fact that f iySDP(x) ≥ f iy(A(x)). In particular,
`A(x, y) = 0 if maxi 6=y f iy SDP(x) < 0. (12)
4 TRAINING THE CERTIFICATE
In the previous section, we proposed an upper bound (12) on the loss `A(x, y) of any attack A, based on the bound (11). Normal training with some classification loss `cls(V,W ;xn, yn) like hinge loss or cross-entropy will encourage the pairwise margin f ij(x) to be large in magnitude, but won’t necessarily cause the second term in (11) involvingM ij to be small. A natural strategy is thus to use the following regularized objective given training examples (xn, yn), which pushes down on both terms:
(W ?, V ?) = argmin W,V ∑ n `cls(V,W ;xn, yn) + ∑ i 6=j λij max P 0,diag(P )≤1 〈 M ij(V,W ), P 〉 , (13)
where λij > 0 are the regularization hyperparameters. However, computing the gradients of the above objective involves finding the optimal solution of a semidefinite program, which is slow.
Duality to the rescue. Our computational burden is lifted by the beautiful theory of duality, which provides the following equivalence between the primal maximization problem over P , and a dual minimization problem over new variables c (see Section A for details):
$$\max_{P \succeq 0,\ \operatorname{diag}(P) \le 1} \langle M^{ij}(V,W),\, P \rangle = \min_{c^{ij} \in \mathbb{R}^D} D \cdot \lambda^{+}_{\max}\!\left( M^{ij}(V,W) - \operatorname{diag}(c^{ij}) \right) + \mathbf{1}^\top \max(c^{ij}, 0), \qquad (14)$$
where D = (d + m + 1) and λ⁺_max(B) is the maximum eigenvalue of B, or 0 if all eigenvalues are negative. This dual formulation allows us to introduce additional dual variables c^{ij} ∈ R^D that are optimized at the same time as the parameters V and W, resulting in an objective that can be trained efficiently using stochastic gradient methods.
The final objective. Using (14), we end up optimizing the following training objective:
$$(W^\star, V^\star, c^\star) = \underset{W,V,c}{\operatorname{argmin}} \sum_n \ell_{\mathrm{cls}}(V, W; x_n, y_n) + \sum_{i \ne j} \lambda^{ij} \left[ D \cdot \lambda^{+}_{\max}\!\left( M^{ij}(V,W) - \operatorname{diag}(c^{ij}) \right) + \mathbf{1}^\top \max(c^{ij}, 0) \right]. \qquad (15)$$
The objective in (15) can be optimized efficiently. The most expensive operation is λ⁺_max, which requires computing the maximum eigenvector of the matrix M^{ij} − diag(c^{ij}) in order to take gradients. This can be done efficiently using standard implementations of iterative methods like Lanczos. Further implementation details (including tuning of λ^{ij}) are presented in Section 6.3.
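To make the regularizer in (15) concrete, the following sketch computes the bracketed dual term D·λ⁺_max(M^{ij} − diag(c^{ij})) + 1⊤max(c^{ij}, 0) for a single class pair. It is a hypothetical PyTorch rendering (the paper uses TensorFlow with a custom Lanczos operator); torch.linalg.eigvalsh stands in for Lanczos and is differentiable, so gradients flow to v, W, and c:

import torch

def dual_regularizer(v, W, c):
    """D * lambda_max^+(M(v, W) - diag(c)) + 1^T max(c, 0), as in (15).

    v: (m,) row difference V_i - V_j; W: (m, d); c: (D,) dual variables.
    """
    m, d = W.shape
    D = 1 + d + m
    Wv = W.t() * v                       # d x m, equals W^T diag(v)
    M = torch.zeros(D, D, dtype=W.dtype)
    M[0, 1 + d:] = Wv.sum(dim=0)         # 1^T W^T diag(v)
    M[1:1 + d, 1 + d:] = Wv
    M = M + M.t()                        # symmetric M as in (8)
    lam_max = torch.linalg.eigvalsh(M - torch.diag(c))[-1]  # largest eigenvalue
    return D * torch.clamp(lam_max, min=0) + torch.clamp(c, min=0).sum()

m, d = 8, 5
v = torch.randn(m, requires_grad=True)
W = torch.randn(m, d, requires_grad=True)
c = torch.zeros(1 + d + m, requires_grad=True)
reg = dual_regularizer(v, W, c)
reg.backward()                           # gradients w.r.t. v, W, c for SGD
print(reg.item())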
Dual certificate of robustness. The dual formulation is also useful because any value of the dual is an upper bound on the optimal value of the primal. Specifically, if (W[t], V[t], c[t]) are the parameters at iteration t of training, then
$$f^{ij}(A(x)) \le f^{ij}(x) + \frac{\epsilon}{4}\left[ D \cdot \lambda^{+}_{\max}\!\left( M^{ij}(V[t], W[t]) - \operatorname{diag}(c[t]^{ij}) \right) + \mathbf{1}^\top \max(c[t]^{ij}, 0) \right], \qquad (16)$$
for any attack A. As we train the network, we obtain a quick upper bound on the worst-case adversarial loss directly from the regularization loss, without having to optimize an SDP each time.
5 OTHER UPPER BOUNDS
In Section 3, we described a function f^{ij}_SDP that yields an efficient upper bound on the adversarial loss, which we obtained using convex relaxations. One could consider other simple ways to upper bound the loss; we describe here two common ones based on the spectral and Frobenius norms.
Spectral bound: Note that v⊤(σ(Wx̃) − σ(Wx)) ≤ ‖v‖₂‖σ(Wx̃) − σ(Wx)‖₂ by Cauchy-Schwarz. Moreover, since σ is contractive, ‖σ(Wx̃) − σ(Wx)‖₂ ≤ ‖W(x̃ − x)‖₂ ≤ ‖W‖₂‖x̃ − x‖₂ ≤ ε√d‖W‖₂, where ‖W‖₂ is the spectral norm (maximum singular value) of W. This yields the following upper bound that we denote by f_spectral:
$$f^{ij}(A(x)) \le f^{ij}_{\mathrm{spectral}}(x) \overset{\text{def}}{=} f^{ij}(x) + \epsilon \sqrt{d}\, \|W\|_2 \|V_i - V_j\|_2. \qquad (17)$$
This measure of vulnerability to adversarial examples based on the spectral norms of the weights of each layer is considered in Szegedy et al. (2014) and Cisse et al. (2017).
Frobenius bound: For ease in training, often the Frobenius norm is regularized (weight decay) instead of the spectral norm. Since ‖W‖_F ≥ ‖W‖₂, we get a corresponding upper bound f_frobenius:
$$f^{ij}(A(x)) \le f^{ij}_{\mathrm{frobenius}}(x) = f^{ij}(x) + \epsilon \sqrt{d}\, \|W\|_F \|V_i - V_j\|_2. \qquad (18)$$
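Both norm-based bounds are essentially one-liners. The snippet below (an illustrative numpy sketch with random weights, not the paper's code) evaluates the additive slack terms of (17) and (18):

import numpy as np

def norm_margins(W, Vi, Vj, eps):
    """Additive slack terms of the spectral (17) and Frobenius (18) bounds."""
    d = W.shape[1]
    dv = np.linalg.norm(Vi - Vj)                   # ||V_i - V_j||_2
    spec = eps * np.sqrt(d) * np.linalg.norm(W, 2) * dv
    frob = eps * np.sqrt(d) * np.linalg.norm(W, 'fro') * dv
    return spec, frob

rng = np.random.default_rng(0)
W = rng.standard_normal((500, 784))                # m=500 hidden units, MNIST d=784
Vi, Vj = rng.standard_normal(500), rng.standard_normal(500)
print(norm_margins(W, Vi, Vj, eps=0.1))            # Frobenius term is always >= spectral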
In Section 6, we empirically compare our proposed bound using f^{ij}_SDP to these two upper bounds.
6 EXPERIMENTS
We evaluated our method on the MNIST dataset of handwritten digits, where the task is to classify images into one of ten classes. Our results can be summarized as follows: First, in Section 6.1, we show that our certificates of robustness are tighter than those based on simpler methods such as Frobenius and spectral bounds (Section 5), but our bounds are still too high to be meaningful for general networks. Then in Section 6.2, we show that by training on the certificates, we obtain networks with much better bounds and hence meaningful robustness. This reflects an important point: while accurately analyzing the robustness of an arbitrary network is hard, training the certificate jointly leads to a network that is robust and certifiably so. In Section 6.3, we present implementation details, design choices, and empirical observations that we made while implementing our method.
Networks. In this work, we focus on two-layer networks. In all our experiments, we used neural networks with m = 500 hidden units, and TensorFlow's implementation of Adam (Kingma & Ba, 2014) as the optimizer; we considered networks with more hidden units, but these did not substantially improve accuracy. We experimented with both the multiclass hinge loss and cross-entropy. All hyperparameters (including the choice of loss function) were tuned based on the error of the Projected Gradient Descent (PGD) attack (Madry et al., 2017) at ε = 0.1; we report the hyperparameter settings below. We considered the following training objectives, providing 5 different networks:
1. Normal training (NT-NN). Cross-entropy loss and no explicit regularization.
2. Frobenius norm regularization (Fro-NN). Hinge loss and a regularizer λ(‖W‖_F + ‖v‖₂) with λ = 0.08.
3. Spectral norm regularization (Spe-NN). Hinge loss and a regularizer λ(‖W‖₂ + ‖v‖₂) with λ = 0.09.
4. Adversarial training (AT-NN). Cross-entropy with the adversarial loss against PGD as a regularizer, with the regularization parameter set to 0.5. We found that this regularized loss works better than optimizing only the adversarial loss, which is the defense proposed in Madry et al. (2017). We set the step size of the PGD adversary to 0.1, the number of iterations to 40, and the perturbation size ε to 0.3.
5. Proposed training objective (SDP-NN). Dual SDP objective described in Equation 15 of Section 4. Implementation details and hyperparameter values are detailed in Section 6.3.
Evaluating upper bounds. Below we will consider various upper bounds on the adversarial loss ℓ_{A_opt} (based on our method, as well as the Frobenius and spectral bounds described in Section 5). Ideally we would compare these to the ground-truth adversarial loss ℓ_{A_opt}, but computing this exactly is difficult. Therefore, we compare upper bounds on the adversarial loss with a lower bound on ℓ_{A_opt} instead. The loss of any attack provides a valid lower bound, and we consider the strong Projected Gradient Descent (PGD) attack run against the cross-entropy loss, starting from a random point in B_ε(x), with 5 random restarts. We observed that PGD against hinge loss did not work well, so we used cross-entropy even for attacking networks trained with the hinge loss.
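For reference, a minimal single-restart PGD attack of the kind used for the lower bound can be sketched as follows (a hypothetical PyTorch implementation assuming a model callable and inputs in [0, 1]; the paper uses 5 random restarts):

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, step=0.01, iters=40):
    """Projected gradient ascent on cross-entropy within the l_inf ball B_eps(x)."""
    # Random start inside the ball (one restart shown here).
    x_adv = (x + eps * (2 * torch.rand_like(x) - 1)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascent step followed by projection back onto B_eps(x) and the pixel box.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv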
6.1 QUALITY OF THE UPPER BOUND
For each of the five networks described above, we computed upper bounds on the 0-1 loss based on our certificate (which we refer to as the “SDP bound” in this section), as well as the Frobenius and spectral bounds described in Section 5. While Section 4 provides a procedure for efficiently obtaining an SDP bound as a result of training, for networks not trained with our method we need to solve an SDP at the end of training to obtain certificates. Fortunately, this only needs to be done once for every pair of classes. In our experiments, we use the modeling toolbox YALMIP (Löfberg, 2004) with Sedumi (Sturm, 1999) as a backend to solve the SDPs, using the dual form (14); this took roughly 10 minutes per SDP (around 8 hours in total for a given model).
In Figure 2, we display average values of the different upper bounds over the 10, 000 test examples, as well as the corresponding lower bound from PGD. We find that our bound is tighter than the Frobenius and spectral bounds for all the networks considered, but its tightness relative to the PGD lower bound varies across the networks. For instance, our bound is relatively tight on Fro-NN, but unfortunately Fro-NN is not very robust against adversarial examples (the PGD attack exhibits large
error). In contrast, the adversarially trained network AT-NN does appear to be robust to attacks, but our certificate, despite being much tighter than the Frobenius and spectral bounds, is far away from the PGD lower bound. The only network that is both robust and has relatively tight upper bounds is SDP-NN, which was explicitly trained to be both robust and certifiable as described in Section 4; we examine this network and the effects of training in more detail in the next subsection.
6.2 EVALUATING THE PROPOSED TRAINING OBJECTIVE
In the previous section, we saw that the SDP bound, while being tighter than simpler upper bounds, could still be quite loose on arbitrary networks. However, optimizing against the SDP certificate seemed to make the certificate tighter. In this section, we explore the effect of different optimization objectives in more detail. First, we plot on a single axis the best upper bound (i.e., the SDP bound) and the lower bound (from PGD) on the adversarial loss obtained with each of the five training objectives discussed above. This is given in Figure 3a.
Neither spectral nor Frobenius norm regularization seems to be helpful for encouraging adversarial robustness—the actual performance of those networks against the PGD attack is worse than the upper bound for SDP-NN against all attacks. This shows that the SDP certificate actually provides a useful training objective for encouraging robustness compared to other regularizers.
Separately, we can ask whether SDP-NN is robust to actual attacks. We explore the robustness of our network in Figure 3b, where we plot the performance of SDP-NN against 3 attacks—the PGD attack from before, the Carlini-Wagner attack (Carlini & Wagner, 2017b) (another strong attack), and the weaker Fast Gradient Sign Method (FGSM) baseline. We see substantial robustness against all 3 attacks, even though our method was not explicitly trained with any of them in mind.
Next, we compare to other bounds reported in the literature. A rough ceiling is given by the network of Madry et al. (2017), which is a relatively large four-layer convolutional network adversarially trained against PGD. While this network has no accompanying certificate of robustness, it was evaluated against a number of attack strategies and had worst-case error 11% at ε = 0.3. Another set of numbers comes from Carlini et al. (2017), who use formal verification methods to compute ℓ_{A_opt} exactly on 10 input examples for a small (72-node) variant of the Madry et al. network. The authors reported to us that this network misclassifies 6 out of 10 examples at ε = 0.05 (we note that 4 out of 10 of these were misclassified to start with, but 3 of the 4 can also be flipped to a different wrong class with some ε < 0.07).
At the value ε = 0.1 for which it was tuned, SDP-NN has error 16% against the PGD attack, and an upper bound of 35% error against any attack. This is substantially better than the small 72-node network, but also much worse than the full Madry et al. network. How much of the latter looseness comes from conservatism in our method, versus the fact that our network has only two layers? We can get some idea by considering the AT-NN network, which was trained similarly to Madry et al., but uses the same architecture as SDP-NN. From Figure 3a, we see that the error of SDP-NN against PGD (16%) is not much worse than that of AT-NN (11%), even though AT-NN was explicitly trained against the PGD attack. This suggests that most of the gap comes from the smaller network depth,
rather than from conservatism in the SDP bound. We are currently in the process of extending our approach to deeper networks, and optimistic about obtaining improved bounds with such networks.
Finally, we compare with the approach proposed in Kolter & Wong (2017) whose work appeared shortly after an initial version of our paper. They provide an upper bound on the adversarial loss using linear programs (LP) followed by a method to efficiently train networks to minimize this upper bound. In order to compare with SDP-NN, the authors provided us with a network with the same architecture as SDP-NN, but trained using their LP based objective. We call this network LP-NN. Table 1 shows that LP-NN and SDP-NN are comparable in terms of their robustness against PGD, and the robustness guarantees that they come with.
Interestingly, the SDP and LP approaches provide vacuous bounds for networks not trained to minimize the respective upper bounds (though these networks are indeed robust). This suggests that these two approaches are comparable, but complementary. Finally, we note that in contrast to this work, the approach of Kolter & Wong (2017) extends to deeper networks, which allows them to train a four-layer CNN with a provable upper bound on adversarial error of 8.4% error.
6.3 IMPLEMENTATION DETAILS
We implemented our training objective in TensorFlow, and implemented λ⁺_max as a custom operator using SciPy's implementation of the Lanczos algorithm for fast top-eigenvector computation; occasionally Lanczos fails to converge due to a small eigen-gap, in which case we back off to a full SVD. We used hinge loss as the classification loss, and decayed the learning rate in steps from 10⁻³ to 10⁻⁵, decreasing by a factor of 10 every 30 epochs. Each gradient step involves computing top eigenvectors for 45 different matrices, one for each pair of classes (i, j). In order to speed up computation, for each update we randomly pick i_t and only compute gradients for pairs (i_t, j), j ≠ i_t, requiring only 9 top-eigenvector computations per step.
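A sketch of the λ⁺_max primitive with the fallback just described might look as follows (illustrative scipy/numpy code, not the authors' implementation):

import numpy as np
from scipy.sparse.linalg import eigsh, ArpackNoConvergence

def lambda_max_plus(M):
    """max(largest eigenvalue of symmetric M, 0) with a dense fallback."""
    try:
        # Lanczos iteration for the largest algebraic eigenvalue.
        vals = eigsh(M, k=1, which='LA', return_eigenvectors=False)
        lam = vals[0]
    except ArpackNoConvergence:
        # Small eigen-gap: fall back to a full symmetric eigendecomposition.
        lam = np.linalg.eigvalsh(M).max()
    return max(lam, 0.0)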
For the regularization parameters λ^{ij}, the simplest idea is to set them all equal to the same value; this leads to the unweighted regularization scheme where λ^{ij} = λ for all pairs (i, j). We tuned λ to 0.05, which led to reasonably good bounds. However, we observed that certain pairs of classes tended to have larger margins f^{ij}(x) than other classes, which meant that certain label pairs appeared in the maximum of (12) much more often. That led us to consider a weighted regularization scheme with λ^{ij} = w^{ij}λ, where w^{ij} is the fraction of training points for which the label i (or j) appears as the maximizing term in (12). We updated the values of these weights every 20 epochs. Figure 4a compares the PGD lower bound and SDP upper bound for the unweighted and weighted networks. The weighted network is better than the unweighted network for both the lower and upper bounds.
Finally, we saw in Equation 16 of Section 4 that the dual variables c^{ij} provide a quick-to-compute certificate of robustness. Figure 4b shows that the certificates provided by these dual variables are very close to what we would obtain by fully optimizing the semidefinite programs. These dual certificates made it easy to track robustness across epochs of training and to tune hyperparameters.
7 DISCUSSION
In this work, we proposed a method for producing certificates of robustness for neural networks, and for training against these certificates to obtain networks that are provably robust against adversaries.
Related work. In parallel and independent work, Kolter & Wong (2017) also provide provably robust networks against `∞ perturbations by using convex relaxations. While our approach uses a single semidefinite program to compute an upper bound on the adversarial loss, Kolter & Wong
(2017) use separate linear programs for every data point, and apply their method to networks of depth up to four. In theory, neither bound is strictly tighter than the other, and our experiments (Table 1) suggest that the two bounds are complementary. Combining the approaches seems to be a promising future direction.
Katz et al. (2017a) and the follow-up Carlini et al. (2017) also provide certificates of robustness for neural networks against ℓ∞ perturbations. That work uses SMT solvers, which are a tool from the formal verification literature. The SMT solver can answer the binary question "Is there an adversarial example within distance ε of the input x?", and is correct whenever it terminates. The main drawback of SMT and similar formal verification methods is that they are slow: they have worst-case exponential-time scaling in the size of the network; moreover, using them during training would require a separate search for each gradient step. Huang et al. (2017) use SMT solvers and are able to analyze state-of-the-art networks on MNIST, but they make various approximations such that their numbers are not true upper bounds.
Bastani et al. (2016) provide tractable certificates but require ε to be small enough to ensure that the entire ℓ∞ ball around an input lies within the same linear region. For the networks and values of ε that we consider in our paper, we found that this condition did not hold. Recently, Hein & Andriushchenko (2017) proposed a bound for guaranteeing robustness to ℓp-norm perturbations, based on the maximum p/(p−1)-norm of the gradient in the ε-ball around the inputs. Hein & Andriushchenko (2017) show how to efficiently compute this bound for p = 2, as opposed to our work, which focuses on ℓ∞ and requires different techniques to achieve scalability.
Madry et al. (2017) perform adversarial training against PGD on the MNIST and CIFAR-10 datasets, obtaining networks that they suggest are “secure against first-order adversaries”. However, this is based on an empirical observation that PGD is nearly-optimal among gradient-based attacks, and does not correspond to any formal robustness guarantee.
Finally, the notion of a certificate appears in the theory of convex optimization, but means something different in that context; specifically, it corresponds to a proof that a point is near the optimum of a convex function, whereas here our certificates provide upper bounds on non-convex functions. Additionally, while robust optimization (Bertsimas et al., 2011) provides a tool for optimizing objectives with robustness constraints, applying it directly would involve the same intractable optimization for Aopt that we deal with here.
Other approaches to verification. While they have not been explored in the context of neural networks, there are approaches in the control theory literature for verifying robustness of dynamical systems, based on Lyapunov functions (Lyapunov, 1892; 1992). We can think of the activations in a neural network as the evolution of a time-varying dynamical system, and attempt to prove stability around a trajectory of this system (Tedrake et al., 2010; Tobenkin et al., 2011). Such methods typically use sum-of-squares verification (Papachristodoulou & Prajna, 2002; 2005; Parrilo, 2003) and are restricted to relatively low-dimensional dynamical systems, but could plausibly scale to
larger settings. Another approach is to construct families of networks that are provably robust a priori, which would remove the need to verify robustness of the learned model; to our knowledge this has not been done for any expressive model families.
Adversarial examples and secure ML. There has been a great deal of recent work on the security of ML systems; we provide only a sampling here, and refer the reader to Barreno et al. (2010), Biggio et al. (2014a), Papernot et al. (2016b), and Gardiner & Nagaraja (2016) for some recent surveys.
Adversarial examples for neural networks were first discovered by Szegedy et al. (2014), and since then a number of attacks and defenses have been proposed. We have already discussed gradient-based methods as well as defenses based on adversarial training. There are also other attacks based on, e.g., saliency maps (Papernot et al., 2016a), KL divergence (Miyato et al., 2015), and elastic net optimization (Chen et al., 2017); many of these attacks are collated in the cleverhans repository (Goodfellow et al., 2016). For defense, rather than making networks robust to adversaries, some work has focused on simply detecting adversarial examples. However, Carlini & Wagner (2017a) recently showed that essentially all known detection methods can be subverted by strong attacks.
As explained in Barreno et al. (2010), there are a number of different attack models beyond the test-time attacks considered here, based on different attacker goals and capabilities. For instance, one can consider data poisoning attacks, where an attacker modifies the training set in an effort to affect test-time performance. Newsome et al. (2006), Laskov & Šrndic̀ (2014), and Biggio et al. (2014b) have demonstrated poisoning attacks against real-world systems.
Other types of certificates. Certificates of performance for machine learning systems are desirable in a number of settings. This includes verifying safety properties of air traffic control systems (Katz et al., 2017a;b) and self-driving cars (O’Kelly et al., 2016; 2017), as well as security applications such as robustness to training time attacks (Steinhardt et al., 2017). More broadly, certificates of performance are likely necessary for deploying machine learning systems in critical infrastructure such as internet packet routing (Winstein & Balakrishnan, 2013; Sivaraman et al., 2014). In robotics, certificates of stability are routinely used both for safety verification (Lygeros et al., 1999; Mitchell et al., 2005) and controller synthesis (Başar & Bernhard, 2008; Tedrake et al., 2010).
In traditional verification work, Rice’s theorem (Rice, 1953) is a strong impossibility result essentially stating that most properties of most programs are undecidable. Similarly, we should expect that verifying robustness for arbitrary neural networks is hard. However, the results in this work suggest that it is possible to learn neural networks that are amenable to verification, in the same way that it is possible to write programs that can be formally verified. Optimistically, given expressive enough certification methods and model families, as well as strong enough specifications of robustness, one could even hope to train vector representations of natural images with strong robustness properties, thus finally closing the chapter on adversarial vulnerabilities in the visual domain.
Reproducibility. All code, data and experiments for this paper are available on the Codalab platform at https://worksheets.codalab.org/worksheets/0xa21e794020bb474d8804ec7bc0543f52/.
Acknowledgements. This work was partially supported by a Future of Life Institute Research Award and an Open Philanthropy Project Award. JS was supported by a Fannie & John Hertz Foundation Fellowship and an NSF Graduate Research Fellowship. We are also grateful to Guy Katz, Zico Kolter and Eric Wong for providing relevant experimental results for comparison, as well as to the anonymous reviewers for useful feedback and references.
A DUALITY
In this section we justify the duality relation (14). Recall that the primal program is
$$\text{maximize } \langle M, P \rangle \quad \text{subject to } P \succeq 0,\ \operatorname{diag}(P) \le 1. \qquad (19)$$
Rather than taking the dual directly, we first add the redundant constraint tr(P) ≤ d + m + 1 (it is redundant because the SDP is in d + m + 1 dimensions and diag(P) ≤ 1). This yields
$$\text{maximize } \langle M, P \rangle \quad \text{subject to } P \succeq 0,\ \operatorname{diag}(P) \le 1,\ \operatorname{tr}(P) \le d + m + 1. \qquad (20)$$
We now form the Lagrangian for the constraints diag(P) ≤ 1, leaving the other two constraints as-is. This yields the equivalent optimization problem
$$\text{maximize } \min_{c \ge 0}\ \langle M, P \rangle + c^\top(\mathbf{1} - \operatorname{diag}(P)) \quad \text{subject to } P \succeq 0,\ \operatorname{tr}(P) \le d + m + 1. \qquad (21)$$
Now, we apply minimax duality to swap the order of min and max; the value of (21) is thus equal to
$$\text{minimize } \max_{P \succeq 0,\ \operatorname{tr}(P) \le d + m + 1}\ \langle M, P \rangle + c^\top(\mathbf{1} - \operatorname{diag}(P)) \quad \text{subject to } c \ge 0. \qquad (22)$$
The inner maximum can be simplified as
$$\mathbf{1}^\top c + (d + m + 1) \cdot \left( \max_{P \succeq 0,\ \operatorname{tr}(P) \le 1} \langle M - \operatorname{diag}(c),\, P \rangle \right) = \mathbf{1}^\top c + (d + m + 1)\, \lambda^{+}_{\max}(M - \operatorname{diag}(c)). \qquad (23)$$
Therefore, (22) simplifies to
$$\text{minimize } \mathbf{1}^\top c + (d + m + 1)\, \lambda^{+}_{\max}(M - \operatorname{diag}(c)) \quad \text{subject to } c \ge 0. \qquad (24)$$
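The relation between (19) and (24) is easy to sanity-check numerically. The sketch below (assuming cvxpy and scipy, with a random symmetric M of toy size) solves the primal SDP and minimizes the dual objective over c; the two values should nearly coincide, up to solver and optimizer tolerance:

import numpy as np
import cvxpy as cp
from scipy.optimize import minimize

rng = np.random.default_rng(1)
D = 8
A = rng.standard_normal((D, D))
M = (A + A.T) / 2                      # random symmetric M

# Primal (19): max <M, P> s.t. P >= 0, diag(P) <= 1.
P = cp.Variable((D, D), symmetric=True)
primal = cp.Problem(cp.Maximize(cp.trace(M @ P)), [P >> 0, cp.diag(P) <= 1])
primal.solve()

# Dual (24), in the unconstrained max(c, 0) form of (14).
def dual(c):
    lam = np.linalg.eigvalsh(M - np.diag(c)).max()
    return np.maximum(c, 0).sum() + D * max(lam, 0.0)

res = minimize(dual, x0=np.zeros(D), method='Nelder-Mead',
               options={'maxiter': 20000, 'xatol': 1e-8, 'fatol': 1e-10})
print(primal.value, res.fun)           # the two values should (nearly) agree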
This is almost the form given in (14), except that c is constrained to be non-negative and we have 1⊤c instead of 1⊤max(c, 0). However, note that for the λ⁺_max term, it is always better for c to be larger; therefore, replacing c with max(c, 0) means that the optimal value of c will always be non-negative, thus allowing us to drop the c ≥ 0 constraint and optimize c in an unconstrained manner. This finally yields the claimed duality relation (14). | 1. What is the focus of the paper regarding neural network security?
2. What are the strengths and weaknesses of the proposed approach in deriving an upper bound on adversarial perturbation?
3. How does the reviewer assess the applicability and computational efficiency of the method? | Review | Review
This paper derived an upper bound on adversarial perturbation for neural networks with one hidden layer. The upper bound is derived via (1) the mean value theorem; (2) replacing the intermediate value by the maximum (eq 4); (3) replacing the (local) maximum of the gradient value by the global maximal value (eq 5); (4) this leads to a non-convex quadratic program, and the authors then apply a convex relaxation similar to MAXCUT to upper bound the objective by an SDP, which can be solved in polynomial time.
The main idea of using an upper bound (as opposed to a lower bound) is reasonable. However, I find there are some limitations/weaknesses of the proposed method:
1. The method is likely not extendable to more complicated and more practical networks beyond the ones discussed in the paper (i.e., those with one hidden layer).
2. SDPs, while tractable, still require very expensive computation to solve exactly.
3. The relaxation seems a bit loose - in particular, in steps 2 and 3 above, the authors replace the gradient value by a global upper bound on it, which seems to me can be pretty loose.
ICLR | Title
Privacy Protected Multi-Domain Collaborative Learning
Abstract
Unsupervised domain adaptation (UDA) aims to transfer knowledge from one or more well-labeled source domains to improve model performance on the different-yet-related target domain without any annotations. However, existing UDA algorithms fail to bring any benefits to source domains and neglect privacy protection during data sharing. With these considerations, we define Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) and propose a novel Mask-Driven Federated Network (MDFNet) to reach a “win-win” deal for multiple domains with data protected. First, each domain is armed with an individual local model via a mask disentanglement mechanism to learn domain-invariant semantics. Second, the centralized server refines the global invariant model by integrating and exchanging local knowledge across all domains. Moreover, adaptive self-supervised optimization is deployed to learn discriminative features for unlabeled domains. Finally, theoretical studies and experimental results illustrate the rationality and effectiveness of our method on solving P2MDCL.
1 INTRODUCTION
Unsupervised domain adaptation (UDA) (Tang et al., 2020; Jiang et al., 2020; Zhang et al., 2020) attempts to transfer knowledge from well-labeled source domains to annotate unlabeled target samples, which exhibit significant domain discrepancy with the source domains due to varying data collection manners and devices. Recent explorations (Na et al., 2021; Dong et al., 2020) suppose the model to be trained has access to both source and target data during the training stage. With such a basic assumption, it becomes possible to measure the domain discrepancy and adopt metric-based solutions (Kang et al., 2020) or domain confusion (Cui et al., 2020; Tang & Jia, 2020) to generate domain-invariant features. However, this hypothesis conflicts with practical concerns about privacy protection, and such methods fail to be deployed to small devices with limited storage.
This requirement motivates source-free domain adaptation (SFDA), where the source-supervised model is available to assist the target domain without any source data (Liang et al., 2020; Li et al., 2020; Kundu et al., 2020). Generally, SFDA either adapts target samples to source-like ones (Liang et al., 2020) or generates fake source samples from the source model, subsequently applying UDA strategies (Kurmi et al., 2021). To improve training efficiency, FADA (Peng et al., 2020) employs a federated learning paradigm (Karimireddy et al., 2020; Chen et al., 2020) by allocating the target domain to a centralized server while keeping multiple source ones as clients. However, this approach is vulnerable to attacks, as source features are transmitted to the target domain. Further, these domain adaptation works ignore the improvement of model generalization on the source domain, which is inconsistent with real-world requirements. For example, long-standing hospitals already have well-annotated patients' data, while newly-built hospitals have just collected data without annotations and, due to the huge labeling cost, need help from the long-standing hospitals. Besides, owing to geographical restrictions, different hospitals only record their local patients' data, resulting in varying population statistics and causing model bias for the long-standing hospitals.
Inspired by the above observation, we introduce a more practical scenario called Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) (shown in Figure 1). Specifically, P2MDCL assumes that the well-annotated source and unlabeled target domains are distributed across different clients and there exists a global server merely communicating with each client and integrating the received model parameters from clients. Finally, the server broadcasts the consensus model to all
clients for their use to reach the win-win deal. The key challenge for P2MDCL is to learn a more generic model by solving two core issues: 1) how to achieve domain alignment during iterative communication; and 2) how to enhance discriminative feature learning.
In this paper, we propose a novel Mask-Driven Federated Network (MDFNet) to address P2MDCL. First, our MDFNet introduces two orthogonal masks following high-level features in each client to activate domain-invariant and domain-specific semantics, respectively. In practice, we minimize the confusion of these two masks to achieve high-quality feature separation and semantic complementarity. Second, the unlabeled target client adopts adaptive self-supervised optimization to learn more discriminative representations via pseudo-label generation. Finally, MDFNet adopts a progressive weighting scheme to balance the effect of each client in model integration on the server, which discovers more knowledge of the labeled client to adjust the model of the unlabeled client during the initial communication rounds; later, the mature unlabeled-client model also yields a positive effect on the feature learning of the labeled client. The main contributions of our work are summarized as:
• First, we are the pioneers to take into account the “win-win” and privacy requirements under unsupervised domain adaptation scenarios by introducing Privacy Protected Multi-Domain Collaborative Learning (P2MDCL).
• Second, we propose an effective algorithm, MDFNet, to mitigate the domain shift in a federated training mechanism, which reaches the win-win deal for all involved domains.
• Finally, we derive the generalized error bound for our method, which theoretically testifies to the rationality of MDFNet. Moreover, extensive experimental results and analysis empirically illustrate the effectiveness of our method on solving P2MDCL.
2 RELATED WORK
Domain Adaptation. Unsupervised domain adaptation (Cui et al., 2020) attempts to build a model with well-labeled source and unlabeled target data at hand by mitigating the domain mismatch. Along this line, recent explorations mainly adopt discrepancy metric-based methods (Yan et al., 2017; Tzeng et al., 2014) and adversarial training schemes (Zhang et al., 2019; Tzeng et al., 2017) to learn domain-invariant features. Although these solutions effectively reduce the influence of domain discrepancy, practical applications rarely permit the co-existence of source and target data, due to the limited storage of small devices and data privacy. This demand stimulates the development of source-free domain adaptation (Liang et al., 2020; Kurmi et al., 2021), which merely provides the well-trained source model for knowledge adaptation on the target domain. In addition, Peng et al. (2020) considers the target domain as the centralized server and multiple source domains as clients, and adopts the federated learning fashion to achieve domain adaptation with multiple discriminators, which is vulnerable to attack because source and target features are transmitted to the discriminators in the centralized target domain. Even though these strategies achieve transfer ability comparable with UDA solutions, empirical studies illustrate that current domain adaptation techniques fail to learn a generalized model for both source and target domains. Alternatively, they only focus on improving target performance, neglecting any benefit to the source domain. To this end, this paper poses a novel and practical scenario, privacy protected multi-domain collaborative learning (P2MDCL), where source and target domains are both regarded as clients independently communicating with the server, which produces and broadcasts the consensus model to the clients for their use.
Federated Learning (FL). FL allows multiple clients to collaboratively complete the same task without exchanging data across clients (Yang et al., 2019). Along with this concept, recent works mainly focus on the semi-supervised scenario (FSSL), where FedMatch (Jeong et al., 2021) allocates unlabeled data on the client side and labeled data in the server, while FedIRM (Liu et al., 2021) only deploys them on various clients. But they both assume the instances across all clients are sampled from the identical distribution. Moreover, Smith et al. (2017); Liu et al. (2020) explore FSSL with non-i.i.d. data by supposing each client contains several well-annotated instances for training. Differently, our considered P2MDCL closely approximates reality: it involves several clients without any annotations, and significant domain discrepancy exists across all clients.
3 THE PROPOSED METHOD
3.1 PROBLEM DEFINITION AND MOTIVATION
The P2MDCL scenario assumes there are L well-annotated source clients D_{l_i} = {(x^l_{(i)j}, y^l_{(i)j})}_{j=1}^{n_{l_i}} (i ∈ {1, · · · , L}) and U unlabeled target clients D_{u_k} = {x^u_{(k)j}}_{j=1}^{n_{u_k}} (k ∈ {L+1, · · · , L+U}), where x and y denote an input sample and its ground-truth label, respectively. The instances of these clients come from different distributions but share the identical category space, and clients are not allowed to exchange private data with each other. Akin to federated learning, the additional global server in P2MDCL collects and assembles all clients' network parameters to form the consensus model. The main motivation of P2MDCL is addressing the negative effect of insufficient training samples in D_{l_i} and label shortage in D_{u_k} to reach a “win-win” deal across all clients. We face two challenges in solving P2MDCL: 1) how to reduce the significant distribution discrepancy while protecting data privacy, and 2) how to learn more generic and discriminative representations for unlabeled target clients. To this end, this work proposes an effective Mask-Driven Federated Network (MDFNet), which deploys mask-driven disentanglement to locally seek domain-specific/invariant features, and explores adaptive self-supervised optimization to promote the discriminative ability of unlabeled target clients.
3.2 MASK-DRIVEN DISENTANGLEMENT
Feature separation is a commonly-used strategy in domain adaptation to disentangle latent representations into domain-specific features and domain-invariant ones (Bousmalis et al., 2016; Peng et al., 2019). However, these methods typically develop two separate networks to extract the corresponding features, which increases the storage burden for each local device with insufficient computational resources. Peng et al. (2019) points out that the high-level neurons of a feature extractor actually involve both domain-specific and domain-invariant knowledge. Inspired by Chattopadhyay et al. (2020), we explore binary masks to achieve feature disentanglement by activating the neurons of interest.
For brevity, we omit the symbols l/u and (k) in the following illustration. As Figure 2 shows, each client of our MDFNet contains a basic feature encoder parameterized by θ_e, mapping the raw input into the hidden space via g_i = θ_e(x_i) ∈ R^d. Subsequently, two additional parameters m̂_s, m̂_I ∈ R^d are introduced into the local network and activated to form the mask probabilities by using the sigmoid function σ(·), i.e., m_s = σ(m̂_s) and m_I = σ(m̂_I). For each feature g_i, based on the mask probabilities, we sample the binary domain-specific and domain-invariant masks (m_i^s, m_i^I ∈ {0, 1}^d) from the Bernoulli distributions. To this end, we obtain the domain-specific and domain-invariant features via the element-wise multiplication ⊗ over binary masks and features, i.e., g_i^s = m_i^s ⊗ g_i and g_i^I = m_i^I ⊗ g_i. Moreover, we adopt three strategies to achieve high-quality feature separation. Concretely, each client first minimizes the semantic overlap between g_i^I and g_i^s to store complementary information in them. Motivated by Rahman & Wang (2016), we design the following soft-interactive loss:
$$\mathcal{L}_s = \sum_i \frac{\langle g_i^s,\, g_i^I \rangle}{\operatorname{sum}(g_i^s + g_i^I - g_i^s \otimes g_i^I)}, \qquad (1)$$
where ⟨·, ·⟩ means the inner product of two feature vectors, and sum(·) represents the sum of all elements of a vector. This approximately reflects the information overlap of the two mask distributions. Minimizing the soft-interactive loss gradually increases the difference between m_i^s and m_i^I, which activate different neurons. Similar to DSN (Bousmalis et al., 2016), each client also develops an individual classifier θ_c(·) that takes the domain-invariant features as input and outputs the category probability distribution θ_c(g_i^I). The cross-entropy loss between the ground truth and the prediction intensifies the discriminative ability of the domain-invariant features. On the other hand, we also feed the combination of g_i^s and g_i^I into the decoder θ_d(·) to reconstruct the original input with L_r = Σ_i ‖θ_d(g_i^s, g_i^I) − x_i‖₂². Thus, the overall loss function of mask-driven disentanglement for labeled clients is formulated as:
$$\min_{\theta_c, \theta_e, \theta_d, \hat{m}_s, \hat{m}_I} \mathcal{L}_{lo} = \sum_i -y_i \log\left( \theta_c(g_i^I) \right) + \mathcal{L}_r + \mathcal{L}_s, \qquad (2)$$
where we actually adopt the straight-through estimator (Bengio et al., 2013) to progressively optimize m̂_s and m̂_I rather than the discrete binary masks, which would otherwise invalidate back-propagation.
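As a sketch of how the labeled-client objective (2) might be assembled (hypothetical PyTorch code with illustrative module names; the Bernoulli masks use the straight-through trick mentioned above):

import torch
import torch.nn.functional as F

def binary_mask(logits):
    """Sample a Bernoulli mask with a straight-through gradient estimator."""
    p = torch.sigmoid(logits)                 # mask probabilities
    b = torch.bernoulli(p)                    # discrete {0, 1} mask
    return b + p - p.detach()                 # forward: b; backward: grad of p

def labeled_client_loss(encoder, decoder, classifier, m_s, m_I, x, y):
    g = encoder(x)                            # (batch, d) hidden features
    gs = binary_mask(m_s) * g                 # domain-specific features g^s
    gI = binary_mask(m_I) * g                 # domain-invariant features g^I
    # Soft-interactive loss as in Eq. (1): overlap of the two masked features.
    Ls = ((gs * gI).sum(dim=1) / (gs + gI - gs * gI).sum(dim=1)).mean()
    Lr = F.mse_loss(decoder(torch.cat([gs, gI], dim=1)), x)   # reconstruction
    Lc = F.cross_entropy(classifier(gI), y)                   # supervised term
    return Lc + Lr + Ls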
3.3 ADAPTIVE SELF-SUPERVISED OPTIMIZATION
Due to the availability of annotations in the well-labeled clients, we can easily calibrate the predicted category distribution to generate discriminative features by using the ground-truth supervision. However, we cannot directly adopt the supervised learning manner to optimize the model in unlabeled clients, given the absence of annotations. Inspired by the successful application of pseudo-labels to the UDA issue (Xie et al., 2018; Gu et al., 2020; Liang et al., 2019; Morerio et al., 2020), we thus propose the adaptive clustering optimization module to gradually produce pseudo-labels as “ground-truth” supervision.
Specifically, after each round of communication, the unlabeled client first receives the model broadcast from the server and uses it to initialize the parameters of θ_e, θ_d, θ_c, m̂_s, m̂_I. Before further optimizing, the client annotates its local data with the received global model, i.e., ŷ_j = argmax_k θ_c(g_j^I)_k. With the predictions, the initial centroid of each category is computed as
$$O_k = \frac{\sum_j \mathbb{1}(\hat{y}_j = k)\, g_j^I}{\sum_j \mathbb{1}(\hat{y}_j = k)},$$
where 1(·) is the indicator function. Since the server model integrates
knowledge from multiple clients, the domain shift negatively affects the accuracy of the inferred ŷ_j. To decrease this influence, we adopt an iterative approach to further update the class centers and pseudo-labels with the local data points. The proposed adaptive clustering optimization mainly includes two operations. The first step is to reassign the label for each instance with spherical K-means (Buchta et al., 2012):
$$\hat{y}_j = \underset{k}{\operatorname{argmin}}\ \tilde{d}\left( g_j^I, O_k \right), \qquad \tilde{d}\left( g_j^I, O_k \right) = \frac{1}{2}\left( 1 - \frac{\langle g_j^I,\, O_k \rangle}{\|g_j^I\| \cdot \|O_k\|} \right). \qquad (3)$$
With the reattached annotations, the second step is to update the class prototype with
$$O_k = \sum_j \mathbb{1}(\hat{y}_j = k)\, \frac{g_j^I}{\|g_j^I\|}.$$
The above two steps are repeated until convergence.
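A compact rendering of this two-step loop (an illustrative numpy sketch; empty clusters are guarded with a small constant) alternates the cosine-distance assignment (3) with the prototype update:

import numpy as np

def adaptive_pseudo_labels(gI, y_hat, num_classes, iters=10):
    """Refine pseudo-labels by alternating assignment (3) and prototype updates."""
    g = gI / np.linalg.norm(gI, axis=1, keepdims=True)   # unit-normalized features
    for _ in range(iters):
        # Class prototypes from the current assignment.
        O = np.stack([g[y_hat == k].sum(axis=0) for k in range(num_classes)])
        O = O / (np.linalg.norm(O, axis=1, keepdims=True) + 1e-8)
        # Reassign each sample to its nearest prototype in cosine distance.
        y_hat = (g @ O.T).argmax(axis=1)
    return y_hat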
After the adaptive self-supervised optimization, we attain the final pseudo-label for each sample and use it as supervision to optimize the local models. However, due to the domain mismatch, not all samples of the unlabeled clients would contribute to parameter sharing: some samples have low reliability and high uncertainty. It is therefore crucial to distinguish positive from negative samples by identifying their potential benefit to the labeled clients. To this end, we add an additional entropy-minimization (EM) term to further improve the certainty of the category predictions, and reformulate Eq. (2) for the unlabeled clients as:
$$\min_{\theta_c, \theta_e, \theta_d, \hat{m}_s, \hat{m}_I} \mathcal{L}_{uo} = \sum_j \left( -\mathbb{I}\!\left(\max(\theta_c(g_j^I)) \ge \sigma\right) \hat{y}_j \log(\theta_c(g_j^I)) - \theta_c(g_j^I) \log(\theta_c(g_j^I)) \right) + \mathcal{L}_r + \mathcal{L}_s, \qquad (4)$$
where I(·) denotes the indicator that filters out samples whose maximum prediction max(θ_c(g_j^I)) is less than a threshold σ, which is set to 0.1 by default throughout our experiments.
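The pseudo-label and entropy terms of (4) can be sketched as follows (hypothetical PyTorch code operating on softmax outputs probs; L_r and L_s are as in the labeled-client sketch above):

import torch

def unlabeled_client_loss(probs, pseudo_labels, sigma=0.1):
    """Confidence-gated pseudo-label CE plus entropy minimization, as in (4)."""
    conf, _ = probs.max(dim=1)
    gate = (conf >= sigma).float()                       # indicator I(.)
    ce = -torch.log(probs.gather(1, pseudo_labels[:, None]).squeeze(1) + 1e-8)
    ent = -(probs * torch.log(probs + 1e-8)).sum(dim=1)  # entropy-minimization term
    return (gate * ce + ent).mean()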
3.3.1 FEDERATED TRAINING
The overall training of our MDFNet involves two important procedures: a) local client training, and b) global server model integration. The clients and server collaboratively perform these steps per communication round and repeat the process until model convergence or the maximum number of communication rounds is reached.
Independent Client Training. In each round, the server broadcasts the consensus model integrated from the last round to all available clients for the initialization of local models. The well-annotated clients then employ their local data to optimize all the modules for one epoch via Eq. (2), while the clients without labels rely on the adaptive clustering optimization to generate pseudo-labels for their samples and update their models with Eq. (4). The clients will locally store the parameters of domain-specific mask and decoder and use them to initialize the network in the next round.
Model Integration. After the local training, the clients will send their local models (excluding the parameters of domain-specific mask and decoder) to the server, where the models are integrated to achieve consensus. However, adopting pseudo-labels as supervision significantly reduces the reliability of the models, especially in the initial training stages. To avoid the negative effect of pseudo-labels, the server assigns different weights to labeled and unlabeled clients as
$$\tilde{\theta} = \frac{1 - \eta_r}{L} \sum_{i=1}^{L} \theta_{(l_i)} + \frac{\eta_r}{U} \sum_{i=L+1}^{L+U} \theta_{(u_i)},$$
where θ ∈ {θ_e, θ_d, θ_c, m̂_I, m̂_s} and η_r = (1 − exp(−ρr)) / (2(1 + exp(−ρr))), r is the round of communication (each round corresponds to one epoch), and ρ is set to 10 in our experiments.
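The aggregation step can be sketched as a weighted federated-averaging update (illustrative Python assuming client parameters arrive as identically-keyed state dicts):

import math

def aggregate(labeled_states, unlabeled_states, round_r, rho=10.0):
    """Weighted average of client parameters with the progressive weight eta_r."""
    eta = (1 - math.exp(-rho * round_r)) / (2 * (1 + math.exp(-rho * round_r)))
    L, U = len(labeled_states), len(unlabeled_states)
    keys = labeled_states[0].keys()
    # eta starts at 0 (all weight on labeled clients) and grows toward 1/2.
    return {k: (1 - eta) / L * sum(s[k] for s in labeled_states)
             + eta / U * sum(s[k] for s in unlabeled_states) for k in keys}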
3.4 GENERALIZED ERROR BOUND ANALYSIS
We first define the basic notation and employ it to derive the generalization error bound for P2MDCL at a high level; the detailed proofs are given in the supplementary material.
Notation. Given the distributions of labeled and unlabeled clients D_{l_i} and D_{u_j} on the input space X, we have access to the ground-truth labeling function f_{l_i} : X → {0, 1} for the clients with annotations, and to the pseudo labeling function f_{u_j} : X → {0, 1} for the unlabeled clients. A hypothesis is a function h : X → {0, 1} with error ε_{l_i}(h, f_{l_i}) := E_{x∼D_{l_i}}[|h(x) − f_{l_i}(x)|] and ε_{u_j}(h, f_{u_j}) := E_{x∼D_{u_j}}[|h(x) − f_{u_j}(x)|]. Thus, the risk and the empirical risk of hypothesis h on D_{l_i} and D_{u_j} are respectively represented as ε_{l_i}(h), ε̂_{l_i}(h), and ε_{u_j}(h), ε̂_{u_j}(h). Moreover, we define the H-divergence between two arbitrary distributions D and D′ as d_H(D, D′) = 2 sup_{A∈A_H} |Pr_D(A) − Pr_{D′}(A)|, where H denotes the hypothesis class on the input space X and A_H is the collection of subsets of X that are the support of some hypothesis in H. The symmetric difference space of the hypothesis class is formulated as H∆H = {h(x) ∗ h′(x) | h, h′ ∈ H}, where ∗ denotes the XOR operation. Our model aims to learn a consensus model through communication between the server and all available clients. This learning strategy actually attempts to minimize a convex combination of empirical risks over all clients with parameters α_i (Σ_{i=1}^{L+U} α_i = 1):
$$\hat{\epsilon}_\alpha(h) = \sum_{i=1}^{L} \alpha_i \hat{\epsilon}_{l_i}(h) + \sum_{i=L+1}^{L+U} \alpha_i \hat{\epsilon}_{u_i}(h).$$
Similarly, we obtain the weighted combination of the risks over all clients as ε_α(h). In addition, since each client independently trains the model with its specific data, we denote the optimal hypotheses achieving the minimum risk on the labeled and unlabeled clients as h*_{l_i} := argmin_{h∈H} ε_{l_i}(h) and h*_{u_j} := argmin_{h∈H} ε_{u_j}(h). With these definitions, it is still intractable to directly deduce the generalized error bound under this scenario. Therefore, we alternatively divide the entire problem into multiple sub-problems and solve them in each client.
Concretely, we first explore the relationship between ε_α and ε_{l_i} or ε_{u_j} with Lemma 1. Second, we derive the upper bound of the difference between ε_α and ε̂_α via Lemma 2. Thus, we easily deduce the generalized error bound of a hypothesis per client in Theorem 1.
Lemma 1. Suppose h is a hypothesis of class H; for each unlabeled client, we then have:
$$|\epsilon_\alpha(h) - \epsilon_{u_j}(h)| \le \sum_{i=1}^{L} \alpha_i \left( \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{l_i}, \mathcal{D}_{u_j}) + \lambda_{l_i} \right) + \sum_{i=L+1,\, i \ne j}^{L+U} \alpha_i \left( \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{u_i}, \mathcal{D}_{u_j}) + \lambda_{u_i} \right),$$
where λ_{l_i} := ε_{l_i}(h*) + ε_{u_j}(h*) and h* is the hypothesis that achieves the minimum risk on D_{l_i} and D_{u_j}, and λ_{u_i} similarly denotes the risk of the optimal hypothesis on the mixture of D_{u_i} and D_{u_j}. Akin to the unlabeled clients, we also derive the analogous inequality for clients with ground truth:
$$|\epsilon_\alpha(h) - \epsilon_{l_j}(h)| \le \sum_{i=1,\, i \ne j}^{L} \alpha_i \left( \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{l_i}, \mathcal{D}_{l_j}) + \lambda_{l_i} \right) + \sum_{i=L+1}^{L+U} \alpha_i \left( \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{u_i}, \mathcal{D}_{l_j}) + \lambda_{u_i} \right),$$
where λ_{l_i} is the risk of the optimal hypothesis on D_{l_i} and D_{l_j}, and λ_{u_i} := ε_{u_i}(h*) + ε_{l_j}(h*).
Lemma 2. Given a hypothesis space H of VC-dimension d, if a random sample of size n is generated by selecting nβ_j data points from D_{l_j} or D_{u_j} and annotating them through f_{l_j} and f_{u_j}, then with probability at least 1 − δ, ∀h ∈ H, we have:
$$|\hat{\epsilon}_\alpha(h) - \epsilon_\alpha(h)| \le \sqrt{\sum_{j=1}^{L+U} \frac{\alpha_j^2}{\beta_j}} \sqrt{\frac{d \log(2n) - \log \delta}{2n}}.$$
Theorem 1. Suppose we are given nβ_i labeled instances from client D_{l_i} for i = 1, · · · , L, and nβ_j unlabeled instances from client D_{u_j} in a federated learning system. We define ĥ = argmin_{h∈H} ε̂_α(h), h*_{l_i} := argmin_{h∈H} ε_{l_i}(h), and h*_{u_j} := argmin_{h∈H} ε_{u_j}(h). Then, ∀α_i ∈ R⁺ with Σ_{i=1}^{L+U} α_i = 1, with probability at least 1 − δ over the choice of samples from each client:
$$\epsilon_{u_j}(\hat{h}) \le \epsilon_{u_j}(h^*_{u_j}) + 2\sqrt{\sum_{j=1}^{L+U} \frac{\alpha_j^2}{\beta_j}} \sqrt{\frac{d \log(2n) - \log \delta}{2n}} + 2\left( \sum_{i=1}^{L} \alpha_i \left( \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{l_i}, \mathcal{D}_{u_j}) + \lambda_{l_i} \right) + \sum_{i=L+1,\, i \ne j}^{L+U} \alpha_i \left( \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{u_i}, \mathcal{D}_{u_j}) + \lambda_{u_i} \right) \right).$$
For the annotated clients, we can achieve a similar inequality. From Theorem 1, we explicitly observe that the risk of a hypothesis trained in the federated manner on a client is determined by three components: the error of the optimal hypothesis h*_{u_j} on its own samples, the VC-dimension constraint, and the distribution discrepancy across clients. To effectively reduce the risk of a hypothesis on all clients, we should not only learn discriminative features via independent client training, but also attempt to solve the domain shift under the constraint of data privacy.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Datasets. Image-CLEF collects visual signals from three domains: Caltech-256 (C), ImageNet ILSVRC 2012 (I) and Pascal VOC 2012 (P), each with the same number of samples. Concretely, each subset includes 600 images evenly distributed over 12 categories. Office-Home (Venkateswara et al., 2017) consists of four domains: Artistic images (Ar, 2,183), Clip Art (Cl, 4,365), Product images (Pr, 4,439) and Real-World images (Rw, 4,357), which share the identical 65 object categories. To verify the “win-win” deal, we randomly split the original data of the labeled client into training and test sets evenly, and repeat this operation ten times.¹
¹We further report the comparison under the original protocol of SHOT in the supplemental materials, where all source samples are used for training and evaluation is only on the target domain.
Baselines. To the best of our knowledge, we are the pioneers to consider the P2MDCL scenario, and we aim to assess whether our algorithm can learn a server model with higher generalization ability that enhances performance across all clients via federated training. To explicitly testify the generalization of the model on each client, this section focuses on P2MDCL with one labeled client and one unlabeled one. Note that we report P2MDCL with more clients in the supplementary material. Since this scenario is similar to UDA and source-free DA, we not only select CDAN (Long et al., 2017) and SRDC (Tang et al., 2020), which achieve state-of-the-art results on the UDA problem, as benchmarks, but also regard SHOT (Liang et al., 2020) as one important competitor. In addition, we consider the source-only method, which merely trains the model on the labeled client. For the mentioned baselines, we use their released code with the suggested parameters and run each task ten times.
Evaluation Metric. In terms of data organization, the labeled client includes training and test sets without any overlap, while all samples of the unlabeled client participate in both model training and evaluation. For UDA and source-free solutions, the training set of the labeled client is considered the source domain and the unlabeled client serves as the target domain. During the test stage, the final model learned by each method is not only evaluated on the test set of the labeled client with the corresponding source accuracy ACC_s, but also tested on the unlabeled client with the target accuracy ACC_t. Moreover, to comprehensively reflect the generalization of the model, we adopt the Harmonic Mean (HM) (Dixon & Chapman, 1980) defined as HM = (2 × ACC_s × ACC_t) / (ACC_s + ACC_t).
Implementation Details. We implement our MDFNet with PyTorch as the platform. The encoder of each client includes a ResNet-50 pre-trained on the ImageNet dataset (Krizhevsky et al., 2012) without the last FC layer, followed by two new FC layers (2,048→512→128). The decoder consists of two FC layers (256→512→2,048) and the classifier includes only one FC layer. The dimensions of m̂_s and m̂_I are both 2,048. For the training period, we fix the parameters of the pre-trained ResNet-50 and adopt stochastic gradient descent (SGD) as the optimizer with momentum 0.9. Following (Zhang et al., 2019), the learning rate is adjusted by ζ_r = ζ_0 / (1 + 10r)^{0.75}, where ζ_0 = 0.01 and r is the communication round. The code is available in the supplementary material.
4.2 RESULT ANALYSIS
Table 1 and Table 2 report the average image recognition results over random data splits and repeated model training. According to the results, we draw four meaningful and interesting conclusions, as detailed below.
First, compared with the others, the model learned by our training strategy achieves the best classification accuracy when evaluated on the test sets of the labeled and unlabeled clients. In terms of the average harmonic metric, our MDFNet performs better than the second-best SHOT by 3.4% on Office-Home. This illustrates that even though data privacy hinders the exchange of knowledge between these two clients, our method still employs the federated
training paradigm to gradually eliminate the domain shift across different clients and improve the generalization of the model. Second, although the UDA-based solutions and SHOT effectively adapt the well-trained source models to the data distribution of the unlabeled client, the progressive adaptation discards considerable source knowledge and results in performance degradation on the test set of the labeled client. For instance, CDAN achieves better average accuracy on the unlabeled client with Office-Home than the source-only method, i.e., 61.42% vs. 57.22%. However, such improvement heavily affects the generalization of CDAN on the labeled client, i.e., CDAN (75.75%) vs. Src-only (85.31%). Different from CDAN, our method even attains more improvement than the source-only method on the test set, especially for the task P→I on the Image-CLEF dataset, where our MDFNet surpasses the source-only method by 6.9%. Third, even though SHOT and MDFNet both protect the data privacy of source domains, our MDFNet learns a better hypothesis with lower error on the unlabeled client. Concretely, for the task Ar→Rw, when assessed on the unlabeled client, our method outperforms SHOT by 4.5%, which means our training manner captures more knowledge from the labeled client via frequent communication between the server and clients to improve the discriminative ability of the model in recognizing unlabeled instances. Finally, we find that all methods achieve higher classification accuracy on unlabeled clients than on labeled clients for tasks P→C and P→I. Specifically, with P as the source domain, the well-trained Src-Only model achieves better performance on the target domain than on the source test set. The main reason is that the samples of the P domain lie in a more diverse within-class distribution, which makes it difficult to learn a discriminative model to recognize its images. Concretely, the P domain includes many images with multiple objects but a single label. For example, one image of the "bird" class in the P domain contains both a bird and a dog, and another includes a bird and a person. However, there are almost no such multi-object images in the C and I domains. Moreover, the same class in the P domain covers more animals than in the C or I domain. For instance, besides several common birds as in the C and I domains, the bird class of the P domain also contains chickens and ostriches, etc. Thus, the model trained on the P domain can easily classify the images of the C and I domains, but struggles to recognize its own source test set.
4.3 EMPIRICAL ANALYSIS
Feature Visualization & Confusion Matrix. Our MDFNet aims to eliminate the distribution discrepancy across different clients and improve the generalization of the model while protecting data privacy. Thus, we extract the hidden features from Src-only, SHOT and MDFNet on task Ar→Rw and follow Zhang et al. (2019) to draw the feature embedding of the test samples of the labeled client and those of the unlabeled client on a 2-D canvas. From Figure 3, we notice that our MDFNet achieves better alignment across these two clients and significantly promotes the discriminative ability of the model, as the class boundaries are explicit among various categories. Moreover, we utilize the confusion matrix to analyse the model performance on the test set of the labeled client (P domain of Image-CLEF). As Figure 4 shows, our method accurately distinguishes several similar objects, such as bike and motorcycle, when compared with Src-only, which illustrates that our MDFNet transfers valuable semantics from the unlabeled client to assist the model in recognizing the samples of the labeled client.
Ablation Study & Convergence. To reveal the importance of the adaptive self-supervised optimization module, we remove the pseudo-label supervision, consider this a variant (Ours-SSO) of MDFNet, and evaluate the model on the unlabeled client. The comparison is reported in Figure 5 (c), where the variant suffers from obvious performance degradation, demonstrating that the self-supervised module effectively facilitates the model to learn more discriminative features. In addition, we further adjust the number of training samples in the labeled client (training set/test set = 3:7) and contrast the performance of the three methods on the unlabeled client. The result in Figure 5 (a) shows that our MDFNet still beats the others in most tasks, even with insufficient well-annotated samples. Finally, we record the relationship between classification accuracy and the communication round in Figure 5 (b), where the model is assessed on the test set of the labeled client and on the unlabeled client. The curves show that MDFNet rapidly converges and that adaptation has no negative effect on recognizing samples of the labeled client.
5 CONCLUSION
Although UDA-based methods effectively avoid performance degradation when applying source knowledge to the target domain, the UDA assumption ignores the improvement of model generalization on the source domain and conflicts with privacy protection. Thus, this paper formulates these practical demands of domain adaptation as a novel scenario, P2MDCL, and proposes the Mask-Driven Federated Network (MDFNet) to address this challenge. Concretely, each individual domain explores a mask disentanglement mechanism to learn domain-invariant features, and the unlabeled clients exploit adaptive self-supervised optimization to generate high-quality pseudo-labels facilitating discriminative feature learning. Moreover, the centralized server refines the global invariant model by assembling local knowledge across all domains. Finally, theoretical and experimental analysis demonstrate the rationality and effectiveness of our MDFNet on solving the P2MDCL problem. | 1. What is the focus and contribution of the paper regarding privacy-protected multi-domain collaborative learning?
2. What are the strengths of the proposed approach, particularly in its performance and novelty?
3. What are the weaknesses of the paper, especially regarding the choice of baselines and experimental setup?
4. Do you have any concerns about the auxiliary communication cost introduced by the server?
5. How does the reviewer assess the clarity and quality of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies the problem of privacy-protected multi-domain collaborative learning, in which a "win-win" deal for the source and target domains can be achieved. The proposed framework, MDFNet, contains multiple local clients and one global server. In each client, the encoder achieves feature separation, and the decoder reconstructs the original data from the separated features. Experiments on benchmark datasets demonstrate the best performance of the proposed MDFNet compared with baselines.
Review
The positive points are as follows.
The defined problem of privacy-protected multi-domain collaborative learning is new and interesting. It is different from the existing UDA or SFDA problems.
The proposed methods are evaluated on benchmark datasets, where various settings of source-target domains are tested. The results show promising performance of the proposed MDFNet on both the source and target domains. The "win-win" deal is achieved well by the proposed MDFNet.
The paper is well-written overall. The figures are clear and easy to understand.
The negative points are as follows.
The choice of baselines. The authors compare the proposed MDFNet mainly with traditional domain-adaptation methods and ignore federated learning ones such as FADA (although it is discussed in the introduction). This work sits at the intersection of federated learning and domain adaptation, yet the authors do not fully discuss the federated learning literature. For example, could we adapt/extend federated learning methods designed for labeled data, such as FADA, to the setting of partly labeled data (no labels for the target domain)?
The experimental setup. The encoder of each client is designed for feature separation. In the experiments, the encoder of each client is pre-trained on the ImageNet dataset. It seems the encoder design does not appear in the baselines, which is not fair since the pre-training serves as prior knowledge of network parameters.
The auxiliary communication cost introduced by the server may be a concern, compared with existing works.
The main results are under the two-domain setting, while the multiple-domain setting is more important (in the current version, the results for the multiple-domain setting are only in the supplemental file and are not sufficient).
Updates after rebuttal: It is nice to see that the authors' replies address some of my concerns. I have updated my recommendation score.
1. How does the proposed method handle domain adaptation for multiple domains while preserving privacy?
2. Can the approach effectively manage scenarios where the source and target data are imbalanced?
3. What are the key components of the proposed masked federated network, and how do they contribute to the model's performance?
4. How does the centralized server refine the global invariant model, and what role does it play in the overall process?
5. Are there any potential limitations or challenges associated with implementing this privacy-preserving multi-domain collaborative learning approach in real-world scenarios? | Summary Of The Paper
Review | Summary Of The Paper
This paper aims to solve a multi-domain collaborative learning problem with privacy protected. Different from previous domain adaptation problems, the goal is to make the model work well on both the source and target domains.
Review
This paper aims to train a model that works well on both the source and target domains while protecting their privacy. First, a mask-driven federated network is designed to train on private source and target data. Two masks are used to extract domain-specific and domain-invariant features in a disentangled way. Then the centralized server refines the global invariant model using both domains. When the amounts of source and target data are imbalanced, will the performance of the model be affected?
ICLR | Title
Privacy Protected Multi-Domain Collaborative Learning
Abstract
Unsupervised domain adaptation (UDA) aims to transfer knowledge from one or more well-labeled source domains to improve model performance on a different-yet-related target domain without any annotations. However, existing UDA algorithms fail to bring any benefit to the source domains and neglect privacy protection during data sharing. With these considerations, we define Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) and propose a novel Mask-Driven Federated Network (MDFNet) to reach a "win-win" deal for multiple domains with data protected. First, each domain is armed with an individual local model via a mask-disentanglement mechanism to learn domain-invariant semantics. Second, the centralized server refines the global invariant model by integrating and exchanging local knowledge across all domains. Moreover, adaptive self-supervised optimization is deployed to learn discriminative features for unlabeled domains. Finally, theoretical studies and experimental results illustrate the rationality and effectiveness of our method on solving P2MDCL.
1 INTRODUCTION
Unsupervised domain adaptation (UDA) (Tang et al., 2020; Jiang et al., 2020; Zhang et al., 2020) attempts to transfer knowledge from well-labeled source domains to annotate unlabeled target samples, which exhibit significant domain discrepancy from the source domains due to varying data collection manners and devices. Recent explorations (Na et al., 2021; Dong et al., 2020) suppose the model to be trained has access to both source and target data during the training stage. Under this assumption, it becomes possible to measure the domain discrepancy and adopt metric-based solutions (Kang et al., 2020) or domain confusion (Cui et al., 2020; Tang & Jia, 2020) to generate domain-invariant features. However, the hypothesis conflicts with the privacy-protection concerns of practical applications and prevents deployment on small devices with limited storage.
This requirement motivates source-free domain adaptation (SFDA), where the source-supervised model is available to assist the target domain without any source data (Liang et al., 2020; Li et al., 2020; Kundu et al., 2020). Generally, SFDA either adapts target samples to source-like ones (Liang et al., 2020) or generates fake source samples from the source model and subsequently applies UDA strategies (Kurmi et al., 2021). To improve training efficiency, FADA (Peng et al., 2020) employs a federated learning paradigm (Karimireddy et al., 2020; Chen et al., 2020) by allocating the target domain on a centralized server while keeping multiple source domains as clients. However, this approach is vulnerable to attacks as source features are transmitted to the target domain. Further, these domain adaptation works ignore the improvement of model generalization on the source domain, which is inconsistent with practical requirements. For example, long-standing hospitals already have well-annotated patient data, while newly-built hospitals have only collected data without annotation and need help from the long-standing hospitals due to the huge labeling cost. Besides, owing to geographical restrictions, different hospitals only record their local patients' data, resulting in varying population statistics and causing model bias for the long-standing hospitals.
Inspired by the above observations, we introduce a more practical scenario called Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) (shown in Figure 1). Specifically, P2MDCL assumes that the well-annotated source and unlabeled target domains are distributed across different clients, and there exists a global server that merely communicates with each client and integrates the received model parameters. Finally, the server broadcasts the consensus model to all clients for their use to reach the win-win deal. The key challenge of P2MDCL is to learn a more generic model by solving two core issues: 1) how to achieve domain alignment during iterative communication; and 2) how to enhance discriminative feature learning.
In this paper, we propose a novel Mask-Driven Federated Network (MDFNet) to address P2MDCL. First, MDFNet introduces two orthogonal masks over the high-level features in each client to activate domain-invariant and domain-specific semantics, respectively. In practice, we minimize the confusion of these two masks to achieve high-quality feature separation and semantic complementarity. Second, the unlabeled target clients adopt adaptive self-supervised optimization to learn more discriminative representations via pseudo-label generation. Finally, MDFNet adopts a progressive weighting scheme to balance the effect of each client during model integration on the server: it draws more knowledge from the labeled client to adjust the model of the unlabeled client during the initial communication rounds, and the matured unlabeled-client model later also has a positive effect on the feature learning of the labeled client. The main contributions of our work are summarized as:
• First, we are the first to take into account the "win-win" and privacy requirements under unsupervised domain adaptation scenarios by introducing Privacy Protected Multi-Domain Collaborative Learning (P2MDCL).
• Second, we propose an effective algorithm, MDFNet, that fights off domain shift in a federated training mechanism and reaches a win-win deal for all involved domains.
• Finally, we derive the generalized error bound for our method, which theoretically justifies the rationality of MDFNet. Moreover, extensive experimental results and analyses empirically illustrate the effectiveness of our method on solving P2MDCL.
2 RELATED WORK
Domain Adaptation. Unsupervised domain adaptation (Cui et al., 2020) attempts to build a model with well-labeled source and unlabeled target data at hand by mitigating the domain mismatch. Along this line, recent explorations mainly adopt discrepancy metric-based methods (Yan et al., 2017; Tzeng et al., 2014) and adversarial training schemes (Zhang et al., 2019; Tzeng et al., 2017) to learn domain-invariant features. Although these solutions effectively reduce the influence of domain discrepancy, practical applications rarely permit the co-existence of source and target data due to the limited storage of small devices and data privacy. This demand stimulates the development of source-free domain adaptation (Liang et al., 2020; Kurmi et al., 2021), which merely provides the well-trained source model for knowledge adaptation on the target domain. In addition, Peng et al. (2020) consider the target domain and multiple source domains as the centralized server and clients, respectively, and adopt a federated learning fashion to achieve domain adaptation with multiple discriminators, which is vulnerable to attack because source and target features are transmitted to the discriminators in the centralized target domain. Even though these strategies achieve transfer ability comparable to UDA solutions, empirical studies illustrate that current domain adaptation techniques fail to learn a generalized model for both source and target domains. Instead, they only focus on improving target performance while neglecting any benefit to the source domain. To this end, this paper poses a novel and practical scenario, privacy protected multi-domain collaborative learning (P2MDCL), where source and target domains are both regarded as clients that independently communicate with the server, which produces and broadcasts the consensus model to the clients for their use.
Federated Learning (FL). FL allows multiple clients to collaboratively complete the same task without data exchange across clients (Yang et al., 2019). Along this line, recent works mainly focus on the semi-supervised scenario (FSSL), where FedMatch (Jeong et al., 2021) allocates unlabeled data on the client side and labeled data on the server, while FedIRM (Liu et al., 2021) deploys them only on various clients. However, both assume the instances across all clients are sampled from the identical distribution. Moreover, Smith et al. (2017); Liu et al. (2020) explore FSSL with non-i.i.d. data by supposing each client contains several well-annotated instances for training. Differently, our P2MDCL more closely approximates reality: it involves several clients without any annotations and exhibits significant domain discrepancy across all clients.
3 THE PROPOSED METHOD
3.1 PROBLEM DEFINITION AND MOTIVATION
The P2MDCL scenario assumes there are L well-annotated source clients $\mathcal{D}_{l_i} = \{(x^l_{(i)j}, y^l_{(i)j})\}_{j=1}^{n_{l_i}}$ $(i \in \{1, \cdots, L\})$ and U unlabeled target clients $\mathcal{D}_{u_k} = \{x^u_{(k)j}\}_{j=1}^{n_{u_k}}$ $(k \in \{L+1, \cdots, L+U\})$, where $x$ and $y$ denote an input sample and its ground-truth label, respectively. The instances of these clients come from different distributions but share the identical category space, and clients are not allowed to exchange private data with each other. Akin to federated learning, the additional global server in P2MDCL collects and assembles all clients' network parameters to form the consensus model. The main motivation of P2MDCL is addressing the negative effect of insufficient training samples in $\mathcal{D}_{l_i}$ and label shortage in $\mathcal{D}_{u_k}$ to reach a "win-win" deal across all clients. We face two challenges to solve P2MDCL: 1) how to reduce the significant distribution discrepancy while protecting data privacy; and 2) how to learn more generic and discriminative representations from unlabeled target clients. To this end, this work proposes an effective Mask-Driven Federated Network (MDFNet), which deploys mask-driven disentanglement to locally seek domain-specific/invariant features, and explores adaptive self-supervised optimization to promote the discriminative ability of unlabeled target clients.
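To make the setup concrete, the following minimal sketch shows one way the client-side data layout could be represented in PyTorch-style Python; the class names are illustrative assumptions of ours, not part of the paper.

```python
from dataclasses import dataclass
import torch

@dataclass
class LabeledClient:
    x: torch.Tensor   # inputs, shape (n_l, ...)
    y: torch.Tensor   # ground-truth labels, shape (n_l,)

@dataclass
class UnlabeledClient:
    x: torch.Tensor   # inputs only; pseudo-labels are produced locally

# Clients never exchange x or y; only model parameters travel to the server.
```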
3.2 MASK-DRIVEN DISENTANGLEMENT
Feature separation is a commonly-used strategy in domain adaptation to disentangle the latent representation into domain-specific and domain-invariant features (Bousmalis et al., 2016; Peng et al., 2019). However, these methods typically develop two separate networks to extract the corresponding features, which increases the storage burden for local devices with insufficient computational resources. Peng et al. (2019) point out that the high-level neurons of a feature extractor actually encode both domain-specific and domain-invariant knowledge. Inspired by (Chattopadhyay et al., 2020), we explore binary masks to achieve feature disentanglement by activating the neurons of interest.
For brevity, we omit the symbols l/u and (k) in the following illustration. As Figure 2 shows, each client of our MDFNet contains a basic feature encoder parameterized by $\theta_e$ that maps the raw input into the hidden space via $g_i = \theta_e(x_i) \in \mathbb{R}^d$. Subsequently, two additional parameter vectors $\hat{m}^s, \hat{m}^I \in \mathbb{R}^d$ are introduced into the local network and activated to form mask probabilities using the sigmoid function $\sigma(\cdot)$: $m^s = \sigma(\hat{m}^s)$ and $m^I = \sigma(\hat{m}^I)$. For each feature $g_i$, based on the mask probabilities, we sample binary domain-specific and domain-invariant masks $m^s_i, m^I_i \in \{0,1\}^d$ from Bernoulli distributions. We then obtain the domain-specific and domain-invariant features via the element-wise multiplication $\otimes$ of binary masks and features, i.e., $g^s_i = m^s_i \otimes g_i$ and $g^I_i = m^I_i \otimes g_i$. Moreover, we adopt three strategies to achieve high-quality feature separation. Concretely, each client first minimizes the semantic overlap between $g^I_i$ and $g^s_i$ so that they store complementary information. Motivated by Rahman & Wang (2016), we design the following soft-interactive loss:
$$\mathcal{L}_s = \sum_i \frac{\langle g^s_i, g^I_i \rangle}{\mathrm{sum}(g^s_i + g^I_i - g^s_i \otimes g^I_i)}, \quad (1)$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product of two feature vectors, and $\mathrm{sum}(\cdot)$ represents the sum of all elements of a vector. This approximately reflects the information overlap of the two mask distributions. Minimizing the soft-interactive loss gradually increases the difference between $m^s_i$ and $m^I_i$, which activate different neurons. Similar to DSN (Bousmalis et al., 2016), each client also develops an individual classifier $\theta_c(\cdot)$ that takes domain-invariant features as input and outputs the category probability distribution $\theta_c(g^I_i)$. The cross-entropy loss between the ground truth and the prediction strengthens the discriminative ability of the domain-invariant features. On the other hand, we also feed the combination of $g^s_i$ and $g^I_i$ into a decoder $\theta_d(\cdot)$ to reconstruct the original input with $\mathcal{L}_r = \sum_i \|\theta_d(g^s_i, g^I_i) - x_i\|_2^2$. Thus, the overall loss function of mask-driven disentanglement for labeled clients is formulated as:
$$\min_{\theta_c, \theta_e, \theta_d, \hat{m}^s, \hat{m}^I} \mathcal{L}_{lo} = \sum_i -y_i \log\big(\theta_c(g^I_i)\big) + \mathcal{L}_r + \mathcal{L}_s, \quad (2)$$
where we adopt the straight-through estimator (Bengio et al., 2013) to progressively optimize $\hat{m}^s$ and $\hat{m}^I$, since back-propagating through the discrete binary masks directly would be invalid.
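The following PyTorch sketch illustrates how this disentanglement step and the three losses above could be implemented. It is a minimal sketch under our own assumptions: the function and variable names are ours, encoder/decoder/classifier are generic modules, x is taken to be the feature vector the encoder head consumes, and the reductions mirror the sums in Eqs. (1)-(2). It is not the authors' released code.

```python
import torch
import torch.nn.functional as F

def sample_mask(m_hat):
    """Straight-through Bernoulli sample from mask logits m_hat of shape (d,)."""
    p = torch.sigmoid(m_hat)
    hard = torch.bernoulli(p)
    # forward pass uses the hard {0,1} mask; gradients flow through p
    return hard + p - p.detach()

def labeled_client_loss(x, y, encoder, decoder, classifier, m_hat_s, m_hat_i):
    g = encoder(x)                                  # hidden features, (batch, d)
    g_s = g * sample_mask(m_hat_s)                  # domain-specific features
    g_i = g * sample_mask(m_hat_i)                  # domain-invariant features
    # soft-interactive loss Ls (Eq. 1): penalize overlap between g_s and g_i
    overlap = (g_s * g_i).sum(dim=1)
    union = (g_s + g_i - g_s * g_i).sum(dim=1)
    loss_s = (overlap / (union + 1e-8)).sum()
    # reconstruction loss Lr from the combined features
    recon = decoder(torch.cat([g_s, g_i], dim=1))
    loss_r = F.mse_loss(recon, x, reduction="sum")
    # cross-entropy on the invariant features only (Eq. 2)
    loss_ce = F.cross_entropy(classifier(g_i), y, reduction="sum")
    return loss_ce + loss_r + loss_s
```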
3.3 ADAPTIVE SELF-SUPERVISED OPTIMIZATION
Thanks to the annotations available in the well-labeled clients, we can easily calibrate the predicted category distribution and generate discriminative features using ground-truth supervision. However, we cannot directly optimize the models of the unlabeled clients in a supervised manner in the absence of annotations. Inspired by the successful application of pseudo-labels to the UDA problem (Xie et al., 2018; Gu et al., 2020; Liang et al., 2019; Morerio et al., 2020), we propose an adaptive clustering optimization module that gradually produces pseudo-labels as "ground-truth" supervision.
Specifically, after each round of communication, the unlabeled client first receives the model broadcast from the server and uses it to initialize the parameters $\theta_e, \theta_d, \theta_c, \hat{m}^s, \hat{m}^I$. Before further optimization, the client annotates its local data with the received global model, i.e., $\hat{y}_j = \arg\max_k \theta_c(g^I_j)_k$. With these predictions, the initial centroid of each category is computed as
$$O_k = \frac{\sum_j \mathbb{1}(\hat{y}_j = k)\, g^I_j}{\sum_j \mathbb{1}(\hat{y}_j = k)},$$
where $\mathbb{1}(\cdot)$ is the indicator function. Since the server model integrates knowledge from multiple clients, the domain shift negatively affects the accuracy of the inferred $\hat{y}_j$. To decrease this influence, we adopt an iterative approach that further updates the class centers and pseudo-labels with the local data points. The proposed adaptive clustering optimization mainly includes two operations. The first step reassigns the label of each instance with spherical K-means (Buchta et al., 2012):
$$\hat{y}_j = \arg\min_k \tilde{d}\big(g^I_j, O_k\big) = \frac{1}{2}\left(1 - \frac{\langle g^I_j, O_k \rangle}{\|g^I_j\| \cdot \|O_k\|}\right). \quad (3)$$
With the reattached annotations, the second step updates the class prototypes with $O_k = \sum_j \mathbb{1}(\hat{y}_j = k)\, \frac{g^I_j}{\|g^I_j\|}$. The above two steps are repeated till convergence.
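A minimal sketch of this two-step loop is given below, assuming invariant features g of shape (n, d) and class logits from the received global model; the helper name and the fixed iteration cap are our assumptions.

```python
import torch
import torch.nn.functional as F

def assign_pseudo_labels(g, logits, num_classes, max_iters=10):
    """Iterate centroid update and spherical K-means reassignment (Eq. 3)."""
    y_hat = logits.argmax(dim=1)            # initial labels from the global model
    g_n = F.normalize(g, dim=1)             # length-normalized invariant features
    for _ in range(max_iters):
        one_hot = F.one_hot(y_hat, num_classes).float()   # (n, K)
        centroids = one_hot.t() @ g_n                     # prototypes O_k, (K, d)
        sim = g_n @ F.normalize(centroids, dim=1).t()     # cosine similarity, (n, K)
        new_y = sim.argmax(dim=1)           # min cosine distance = max similarity
        if torch.equal(new_y, y_hat):
            break                           # assignments converged
        y_hat = new_y
    return y_hat
```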
After the adaptive self-supervised optimization, we attain the final pseudo-label for each sample and use it as supervision to optimize the local models. However, due to the domain mismatch, not all samples of the unlabeled clients contribute to parameter sharing, and some samples suffer from low reliability and high uncertainty. It is therefore crucial to distinguish positive from negative samples by identifying their potential benefit to the labeled clients. To this end, we add an entropy-minimization (EM) term to further improve the certainty of the category predictions and reformulate Eq. (2) for the unlabeled clients as:
$$\min_{\theta_c, \theta_e, \theta_d, \hat{m}^s, \hat{m}^I} \mathcal{L}_{uo} = \sum_j \Big( -\mathbb{I}\big(\max(\theta_c(g^I_j)) \geq \sigma\big)\, \hat{y}_j \log\big(\theta_c(g^I_j)\big) - \theta_c(g^I_j) \log\big(\theta_c(g^I_j)\big) \Big) + \mathcal{L}_r + \mathcal{L}_s, \quad (4)$$
where $\mathbb{I}(\cdot)$ denotes the indicator that filters out samples whose maximum prediction $\max(\theta_c(g^I_j))$ is less than a threshold $\sigma$, which is set to 0.1 by default throughout our experiments.
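In code, the per-sample part of Eq. (4) could look like the sketch below, where the filtered cross-entropy uses the pseudo-labels from the clustering step and the second term is the entropy-minimization penalty; sigma follows the text, everything else is our assumption.

```python
import torch
import torch.nn.functional as F

def unlabeled_objective(logits, pseudo_y, sigma=0.1):
    """Confidence-filtered pseudo-label CE plus entropy minimization (Eq. 4)."""
    probs = F.softmax(logits, dim=1)
    keep = (probs.max(dim=1).values >= sigma).float()      # indicator I(.)
    ce = F.cross_entropy(logits, pseudo_y, reduction="none")
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    return (keep * ce + entropy).sum()   # add Lr and Ls as in Eq. (2)
```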
3.3.1 FEDERATED TRAINING
The overall training of our MDFNet involves two important procedures: a) local client training, and b) global server model integration. The clients and server collaboratively execute these steps in each communication round and repeat the process until the model converges or the maximum number of communication rounds is reached.
Independent Client Training. In each round, the server broadcasts the consensus model integrated in the last round to all available clients to initialize the local models. The well-annotated clients then employ their local data to optimize all modules for one epoch via Eq. (2), while the clients without labels rely on the adaptive clustering optimization to generate pseudo-labels for their samples and update their models with Eq. (4). Each client locally stores the parameters of its domain-specific mask and decoder and uses them to initialize the network in the next round.
Model Integration. After the local training, the clients send their local models (excluding the parameters of the domain-specific mask and the decoder) to the server, where the models are integrated to reach consensus. However, adopting pseudo-labels as supervision reduces the reliability of the models, especially in the initial training stages. To avoid the negative effect of pseudo-labels, the server assigns different weights to labeled and unlabeled clients: $\tilde{\theta} = \frac{1-\eta_r}{L} \sum_{i=1}^{L} \theta_{(l_i)} + \frac{\eta_r}{U} \sum_{i=L+1}^{L+U} \theta_{(u_i)}$, where $\theta \in \{\theta_e, \theta_c, \hat{m}^I\}$ and $\eta_r = \frac{1-\exp(-\rho r)}{2(1+\exp(-\rho r))}$; here $r$ is the communication round (each round corresponds to one epoch), and $\rho$ is set to 10 in the experiments.
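A sketch of this server-side rule, operating on client state_dicts, is shown below; the function name is ours, while rho and the round-dependent weight follow the formula above.

```python
import math

def integrate(labeled_states, unlabeled_states, r, rho=10.0):
    """Weighted averaging of client parameters into the consensus model."""
    eta_r = (1.0 - math.exp(-rho * r)) / (2.0 * (1.0 + math.exp(-rho * r)))
    L, U = len(labeled_states), len(unlabeled_states)
    consensus = {}
    for name in labeled_states[0]:
        lab_avg = sum(s[name] for s in labeled_states) / L
        unlab_avg = sum(s[name] for s in unlabeled_states) / U
        consensus[name] = (1.0 - eta_r) * lab_avg + eta_r * unlab_avg
    return consensus
```

Note that $\eta_r$ grows from 0 toward 0.5 as $r$ increases, so the unlabeled clients are down-weighted in early rounds and approach equal weight once their pseudo-labels have matured.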
3.4 GENERALIZED ERROR BOUND ANALYSIS
We first define the basic notation and employ it to derive the generalization error bound for P2MDCL at a high level; the detailed proofs are given in the supplementary material.
Notation. Given the distributions of labeled and unlabeled clients $\mathcal{D}_{l_i}$ and $\mathcal{D}_{u_j}$ on the input space $\mathcal{X}$, we have access to the ground-truth labeling function $f_{l_i}: \mathcal{X} \to \{0,1\}$ for the clients with annotations, and to the pseudo-labeling function $f_{u_j}: \mathcal{X} \to \{0,1\}$ for the unlabeled clients. A hypothesis is a function $h: \mathcal{X} \to \{0,1\}$ with errors $\epsilon_{l_i}(h, f_{l_i}) := \mathbb{E}_{x \sim \mathcal{D}_{l_i}}[|h(x) - f_{l_i}(x)|]$ and $\epsilon_{u_j}(h, f_{u_j}) := \mathbb{E}_{x \sim \mathcal{D}_{u_j}}[|h(x) - f_{u_j}(x)|]$. Thus, the risk and the empirical risk of hypothesis $h$ on $\mathcal{D}_{l_i}$ and $\mathcal{D}_{u_j}$ are denoted $\epsilon_{l_i}(h), \hat{\epsilon}_{l_i}(h)$ and $\epsilon_{u_j}(h), \hat{\epsilon}_{u_j}(h)$, respectively. Moreover, we define the $\mathcal{H}$-divergence between two arbitrary distributions $\mathcal{D}$ and $\mathcal{D}'$ as $d_{\mathcal{H}}(\mathcal{D}, \mathcal{D}') = 2 \sup_{A \in \mathcal{A}_{\mathcal{H}}} |\mathrm{Pr}_{\mathcal{D}}(A) - \mathrm{Pr}_{\mathcal{D}'}(A)|$, where $\mathcal{H}$ is the hypothesis class on the input space $\mathcal{X}$ and $\mathcal{A}_{\mathcal{H}}$ is the collection of subsets of $\mathcal{X}$ that are the support of some hypothesis in $\mathcal{H}$. The symmetric difference hypothesis space is $\mathcal{H}\Delta\mathcal{H} = \{h(x) \ast h'(x) \mid h, h' \in \mathcal{H}\}$, where $\ast$ denotes the XOR operation. Our model aims to learn a consensus model through communication between the server and all available clients. This learning strategy attempts to minimize a convex combination of empirical risks over all clients with parameters $\alpha_i$ ($\sum_{i=1}^{L+U} \alpha_i = 1$): $\hat{\epsilon}_\alpha(h) = \sum_{i=1}^{L} \alpha_i \hat{\epsilon}_{l_i}(h) + \sum_{i=L+1}^{L+U} \alpha_i \hat{\epsilon}_{u_i}(h)$. Similarly, we obtain the weighted combination of the true risks over all clients, $\epsilon_\alpha(h)$. In addition, since each client independently trains the model with its own data, we denote the optimal hypotheses achieving the minimum risk on the labeled and unlabeled clients as $h^*_{l_i} := \arg\min_{h \in \mathcal{H}} \epsilon_{l_i}(h)$ and $h^*_{u_j} := \arg\min_{h \in \mathcal{H}} \epsilon_{u_j}(h)$. With these definitions, it is still intractable to directly deduce the generalized error bound in this scenario. Therefore, we divide the entire problem into multiple sub-problems and solve them per client.
Concretely, we first explore the relationship between $\epsilon_\alpha$ and $\epsilon_{l_i}$ or $\epsilon_{u_j}$ with Lemma 1. Second, we derive the upper bound of the difference between $\epsilon_\alpha$ and $\hat{\epsilon}_\alpha$ via Lemma 2. We can then deduce the generalized error bound of a hypothesis per client in Theorem 1.
Lemma 1. Suppose $h$ is a hypothesis in class $\mathcal{H}$. For each unlabeled client, we have:
$$|\epsilon_\alpha(h) - \epsilon_{u_j}(h)| \leq \sum_{i=1}^{L} \alpha_i \Big(\tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{l_i}, \mathcal{D}_{u_j}) + \lambda_{l_i}\Big) + \sum_{i=L+1, i \neq j}^{L+U} \alpha_i \Big(\tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{u_i}, \mathcal{D}_{u_j}) + \lambda_{u_i}\Big),$$
where $\lambda_{l_i} := \epsilon_{l_i}(h^*) + \epsilon_{u_j}(h^*)$ and $h^*$ is the hypothesis achieving the minimum risk on $\mathcal{D}_{l_i}$ and $\mathcal{D}_{u_j}$; $\lambda_{u_i}$ similarly denotes the risk of the optimal hypothesis on the mixture of $\mathcal{D}_{u_i}$ and $\mathcal{D}_{u_j}$. Akin to the unlabeled clients, we derive the analogous inequality for clients with ground truth:
$$|\epsilon_\alpha(h) - \epsilon_{l_j}(h)| \leq \sum_{i=1, i \neq j}^{L} \alpha_i \Big(\tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{l_i}, \mathcal{D}_{l_j}) + \lambda_{l_i}\Big) + \sum_{i=L+1}^{L+U} \alpha_i \Big(\tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{u_i}, \mathcal{D}_{l_j}) + \lambda_{u_i}\Big),$$
where $\lambda_{l_i}$ is the risk of the optimal hypothesis on $\mathcal{D}_{l_i}$ and $\mathcal{D}_{l_j}$, and $\lambda_{u_i} := \epsilon_{u_i}(h^*) + \epsilon_{l_j}(h^*)$.
Lemma 2. Given a hypothesis space $\mathcal{H}$ of VC-dimension $d$, if a random sample of size $n$ is generated by selecting $n\beta_j$ data points from $\mathcal{D}_{l_j}$ or $\mathcal{D}_{u_j}$ and annotating them through $f_{l_j}$ and $f_{u_j}$, then with probability at least $1-\delta$, $\forall h \in \mathcal{H}$, we have:
$$|\hat{\epsilon}_\alpha(h) - \epsilon_\alpha(h)| \leq \sqrt{\sum_{j=1}^{L+U} \frac{\alpha_j^2}{\beta_j}} \sqrt{\frac{d \log(2n) - \log \delta}{2n}}.$$
Theorem 1. Suppose we are given $n\beta_i$ labeled instances from client $\mathcal{D}_{l_i}$ for $i = 1, \cdots, L$, and $n\beta_j$ unlabeled instances from client $\mathcal{D}_{u_j}$ in a federated learning system, and define $\hat{h} = \arg\min_{h \in \mathcal{H}} \hat{\epsilon}_\alpha(h)$, $h^*_{l_i} := \arg\min_{h \in \mathcal{H}} \epsilon_{l_i}(h)$, and $h^*_{u_j} := \arg\min_{h \in \mathcal{H}} \epsilon_{u_j}(h)$. Then, $\forall \alpha_i \in \mathbb{R}^+$ with $\sum_{i=1}^{L+U} \alpha_i = 1$, with probability at least $1-\delta$ over the choice of samples from each client:
$$\epsilon_{u_j}(\hat{h}) \leq \epsilon_{u_j}(h^*_{u_j}) + 2\sqrt{\sum_{j=1}^{L+U} \frac{\alpha_j^2}{\beta_j}} \sqrt{\frac{d \log(2n) - \log \delta}{2n}} + 2\Big(\sum_{i=1}^{L} \alpha_i \big(\tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{l_i}, \mathcal{D}_{u_j}) + \lambda_{l_i}\big) + \sum_{i=L+1, i \neq j}^{L+U} \alpha_i \big(\tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{u_i}, \mathcal{D}_{u_j}) + \lambda_{u_i}\big)\Big).$$
For an annotated client, a similar inequality can be derived. From Theorem 1, we observe that the risk of a hypothesis trained in the federated manner on a client is determined by three components: the error of the optimal hypothesis $h^*_{u_j}$ on its own samples, the VC-dimension term, and the distribution discrepancy across clients. Hence, to effectively reduce the risk of a hypothesis on all clients, we should not only learn discriminative features via independent client training but also mitigate the domain shift under the constraint of data privacy.
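To make the bound concrete, the following instantiation (our own specialization, not stated in the text) writes Theorem 1 for the two-client setting used in the experiments, i.e., one labeled client ($L=1$) and one unlabeled client ($U=1$, $j=2$), where the sum over other unlabeled clients is empty:

```latex
% Theorem 1 with L = U = 1 and j = 2 (one labeled, one unlabeled client):
\epsilon_{u_2}(\hat{h}) \le \epsilon_{u_2}(h^{*}_{u_2})
  + 2\sqrt{\frac{\alpha_1^2}{\beta_1} + \frac{\alpha_2^2}{\beta_2}}
    \sqrt{\frac{d\log(2n) - \log\delta}{2n}}
  + 2\,\alpha_1\!\left(\frac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(D_{l_1}, D_{u_2})
    + \lambda_{l_1}\right)
```

The trade-off is then controlled by the single weight $\alpha_1 = 1 - \alpha_2$, which matches the progressive weighting $\eta_r$ used later during server-side model integration.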
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Datasets. Image-CLEF collects visual signals from three domains: Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P), with the same number of samples; each domain includes 600 images evenly distributed over 12 categories. Office-Home (Venkateswara et al., 2017) consists of four domains: Artistic images (Ar, 2,183), Clip Art (Cl, 4,365), Product images (Pr, 4,439), and Real-World images (Rw, 4,357), which share the same 65 object categories. To verify the "win-win" deal, we randomly split the original data of the labeled client into training and test sets evenly, and repeat this operation ten times.1
1We further report the comparison under the original protocol of SHOT in the supplemental materials, where all source samples are used for training and the evaluation is only on the target domain.
Baselines. To the best of our knowledge, we are the first to consider the P2MDCL scenario, and we aim to assess whether our algorithm can learn a server model with higher generalization ability that enhances performance across all clients via federated training. To explicitly test the generalization of the model on each client, this section focuses on P2MDCL with one labeled client and one unlabeled client; note that we report P2MDCL with more clients in the supplementary material. Since this scenario is similar to UDA and source-free DA, we select CDAN (Long et al., 2017) and SRDC (Tang et al., 2020), which achieve state-of-the-art results on the UDA problem, as benchmarks, and also regard SHOT (Liang et al., 2020) as an important competitor. In addition, we consider a source-only method that merely trains the model on the labeled client. For all baselines, we use the released code with the suggested parameters and run each task ten times.
Evaluation Metric. In terms of data organization, the labeled client includes non-overlapping training and test sets, while all samples of the unlabeled client participate in both model training and evaluation. For UDA and source-free solutions, the training set of the labeled client is considered the source domain and the unlabeled client serves as the target domain. At test time, the final model learned by each method is evaluated both on the test set of the labeled client, yielding the source accuracy $ACC_s$, and on the unlabeled client, yielding the target accuracy $ACC_t$. Moreover, to comprehensively reflect the generalization of the model, we adopt the Harmonic Mean (HM) (Dixon & Chapman, 1980), defined as $HM = \frac{2 \times ACC_s \times ACC_t}{ACC_s + ACC_t}$.
Implementation Details. We implement our MDFNet with PyTorch. The encoder of each client includes a ResNet-50 pre-trained on the ImageNet dataset (Krizhevsky et al., 2012) without the last FC layer, followed by two new FC layers (2,048→512→128). The decoder consists of two FC layers (256→512→2,048), and the classifier includes a single FC layer. The dimensions of $\hat{m}^s$ and $\hat{m}^I$ are both 2,048. During training, we fix the parameters of the pre-trained ResNet-50 and adopt stochastic gradient descent (SGD) with momentum 0.9 as the optimizer. Following (Zhang et al., 2019), the learning rate is adjusted by $\zeta_r = \zeta_0 / (1 + 10r)^{0.75}$, where $\zeta_0 = 0.01$ and $r$ is the communication round. The code is available in the supplementary.
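A minimal PyTorch sketch of this schedule follows; reading the garbled formula as the standard decay $\zeta_r = \zeta_0/(1+10r)^{0.75}$ from Zhang et al. (2019) is our assumption, and the linear model is a hypothetical stand-in for the trainable client modules:

```python
import torch

def adjusted_lr(zeta_0: float, r: int) -> float:
    # zeta_r = zeta_0 / (1 + 10 * r) ** 0.75, with zeta_0 = 0.01 and
    # r the communication round (our reading of the schedule above).
    return zeta_0 / (1.0 + 10.0 * r) ** 0.75

model = torch.nn.Linear(2048, 65)  # stand-in for the trainable client modules
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for r in range(50):  # communication rounds
    for group in optimizer.param_groups:
        group["lr"] = adjusted_lr(0.01, r)
    # ... one local training epoch per round ...
```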
4.2 RESULT ANALYSIS
Table 1 and Table 2 report the average image recognition results over the random data splits and training runs. From the results, we draw four meaningful and interesting conclusions, as below.
First, compared with the other methods, the model learned by our training strategy achieves the best classification accuracy when evaluated on the test sets of both the labeled and unlabeled clients. In terms of the average harmonic metric, our MDFNet outperforms the second-best method, SHOT, by 3.4% on Office-Home. This illustrates that even though data privacy hinders the exchange of knowledge between the two clients, our method still exploits the federated training paradigm to gradually eliminate the domain shift across clients and improve the generalization of the model. Second, although the UDA-based solutions and SHOT effectively adapt the well-trained source models to the data distribution of the unlabeled client, the progressive adaptation discards considerable source knowledge and results in performance degradation on the test set of the labeled client. For instance, CDAN achieves better average accuracy on the unlabeled client of Office-Home than the source-only method, i.e., 61.42% vs. 57.22%. However, this improvement heavily hurts the generalization of CDAN on the labeled client, i.e., CDAN (75.75%) vs. Src-only (85.31%). Unlike CDAN, our method even attains a larger improvement than the source-only method on the test set, especially for the task P→I on the Image-CLEF dataset, where our MDFNet surpasses the source-only method by 6.9%. Third, even though SHOT and MDFNet both protect the data privacy of source domains, our MDFNet learns a better hypothesis with lower error on the unlabeled client. Concretely, for the task Ar→Rw, when assessed on the unlabeled client, our method outperforms SHOT by 4.5%, which indicates that our training scheme captures more knowledge from the labeled client via frequent communication between the server and clients, improving the discriminative ability of the model on unlabeled instances. Finally, we find that all methods achieve higher classification accuracy on unlabeled clients than on labeled clients for the tasks P→C and P→I. Specifically, with P as the source domain, the well-trained Src-only model performs better on the target domain than on the source test set. The main reason is that the samples of the P domain follow a more diverse within-class distribution, which makes it difficult to learn a discriminative model for its images. Concretely, the P domain includes many images with multiple objects but a single label. For example, one image of the "bird" class in the P domain contains both a bird and a dog, and another contains a bird and a person, whereas there are almost no such multi-object images in the C and I domains. Moreover, the same class in the P domain covers more animals than in the C or I domain; for instance, besides the common birds in the C and I domains, the bird class of the P domain also includes chickens, ostriches, etc. Thus, a source model trained on the P domain can easily classify the images of the C and I domains, but struggles to recognize its own source test set.
4.3 EMPIRICAL ANALYSIS
Feature Visualization & Confusion Matrix. Our MDFNet aims to eliminate the distribution discrepancy across different clients and improve the generalization of the model under data privacy protection. Thus, we extract the hidden features of Src-only, SHOT, and MDFNet on the task Ar→Rw and follow (Zhang et al., 2019) to draw the feature embeddings of the test samples of the labeled client and those of the unlabeled client on a 2-D canvas. From Figure 3, we notice that our MDFNet achieves better alignment across these two clients and significantly promotes the discriminative ability of the model, as the class boundaries are explicit among the various categories. Moreover, we use the confusion matrix to analyze the model performance on the test set of the labeled client (the P domain of Image-CLEF). As Figure 4 shows, compared with Src-only, our method accurately distinguishes several similar objects such as bike and motorcycle, which illustrates that MDFNet transfers valuable semantics from the unlabeled client to help the model recognize samples of the labeled client.
Ablation Study & Convergence. To reveal the importance of the adaptive self-supervised optimization module, we remove the pseudo-label supervision, treat the result as a variant (Ours-SSO) of MDFNet, and evaluate the model on the unlabeled client. The comparison is reported in Figure 5 (c), where the variant suffers an obvious performance degradation, demonstrating that the self-supervised module effectively helps the model learn more discriminative features. In addition, we reduce the number of training samples in the labeled client (training set/test set = 3:7) and contrast the performance of the three methods on the unlabeled client. The results in Figure 5 (a) show that our MDFNet still beats the others in most tasks even with insufficient well-annotated samples. Finally, we record the relationship between classification accuracy and the communication round in Figure 5 (b), where the model is assessed on the test set of the labeled client and on the unlabeled client. The curves show that MDFNet converges rapidly and that adaptation has no negative effect on recognizing samples of the labeled client.
5 CONCLUSION
Although UDA-based methods effectively avoid performance degradation when applying source knowledge to the target domain, the UDA assumption ignores the improvement of model generalization on the source domain and conflicts with privacy protection. Thus, this paper formulates these practical demands of domain adaptation as a novel scenario, P2MDCL, and proposes the mask-driven federated network (MDFNet) to address this challenge. Concretely, each individual domain explores a mask disentanglement mechanism to learn domain-invariant features, and the unlabeled clients exploit adaptive self-supervised optimization to generate high-quality pseudo labels that facilitate discriminative feature learning. Moreover, the centralized server refines the global invariant model by assembling local knowledge across all domains. Finally, theoretical and experimental analyses demonstrate the rationality and effectiveness of our MDFNet in solving the P2MDCL problem. | 1. What is the focus and contribution of the paper regarding federated cross-domain classification?
2. What are the strengths of the proposed approach, particularly in its novel solution, MDFNet?
3. What are the weaknesses of the paper, especially regarding security and communication costs?
4. Do you have any concerns about the methodology or approach used in the paper?
5. Are there any limitations to the practical applicability of the proposed solution? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, the authors identify a new and practical federated cross-domain classification problem called Privacy Protected Multi-Domain Collaborative Learning (P2MDCL), which is illustrated in Figure 1. In particular, the authors design a novel solution called Mask-Driven Federated Network (MDFNet), which is shown in Figure 2, where one domain is modeled as a client with an encoder, a decoder and a classifier.
Review
Strengths:
1 The authors identify a new and practical federated cross-domain classification problem called Privacy Protected Multi-Domain Collaborative Learning (P2MDCL). The differences between the studied scenario and previous works are well described, especially the issues in transfer learning (cross-domain classification) and federated learning, i.e., drift and privacy.
2 The authors design a novel solution called Mask-Driven Federated Network (MDFNet), for which the authors derive some theoretical bounds.
Weaknesses:
1 The authors do not analyze the security (i.e., protection of privacy) of the proposed framework.
2 The authors do not analyze the communication cost between each client (i.e., domain) and the server. In a typical federated learning system, the communication cost is a very important issue.
3 The way of using an encoder and a decoder, or a domain-specific part and a domain-independent part, is well known in existing cross-domain or transfer learning works.
ICLR | Title
Privacy Protected Multi-Domain Collaborative Learning
Abstract
Unsupervised domain adaptation (UDA) aims to transfer knowledge from one or more well-labeled source domains to improve model performance on a different-yet-related target domain without any annotations. However, existing UDA algorithms fail to bring any benefits to source domains and neglect privacy protection during data sharing. With these considerations, we define Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) and propose a novel Mask-Driven Federated Network (MDFNet) to reach a "win-win" deal for multiple domains with data protected. First, each domain is armed with an individual local model via a mask disentanglement mechanism to learn domain-invariant semantics. Second, the centralized server refines the global invariant model by integrating and exchanging local knowledge across all domains. Moreover, adaptive self-supervised optimization is deployed to learn discriminative features for unlabeled domains. Finally, theoretical studies and experimental results illustrate the rationality and effectiveness of our method in solving P2MDCL.
1 INTRODUCTION
Unsupervised domain adaptation (UDA) (Tang et al., 2020; Jiang et al., 2020; Zhang et al., 2020) attempts to transfer knowledge from well-labeled source domains to annotate unlabeled target samples, which exhibit significant domain discrepancy with the source domains due to varying data collection manners and devices. Recent explorations (Na et al., 2021; Dong et al., 2020) suppose the model to be trained has access to both source and target data during the training stage. Under this assumption, it becomes possible to measure the domain discrepancy and adopt metric-based solutions (Kang et al., 2020) or domain confusion (Cui et al., 2020; Tang & Jia, 2020) to generate domain-invariant features. However, this hypothesis violates practical concerns about privacy protection and cannot be deployed on small devices with limited storage.
This requirement motivates source-free domain adaptation (SFDA), where the source-supervised model is available to assist the target domain without any source data (Liang et al., 2020; Li et al., 2020; Kundu et al., 2020). Generally, SFDA either adapts target samples to source-like ones (Liang et al., 2020) or generates fake source samples from the source model before applying UDA strategies (Kurmi et al., 2021). To improve training efficiency, FADA (Peng et al., 2020) employs a federated learning paradigm (Karimireddy et al., 2020; Chen et al., 2020) by allocating the target domain to a centralized server while keeping multiple source domains as clients. However, this approach is vulnerable to attacks, as the source features are transmitted to the target domain. Furthermore, these domain adaptation works ignore the improvement of model generalization on the source domain, which is inconsistent with real-world requirements. For example, long-standing hospitals already have well-annotated patient data, while newly built hospitals have only collected unlabeled data and need help from the long-standing hospitals due to the huge labeling cost. Besides, owing to geographical restrictions, different hospitals only record their local patients' data, resulting in varying population statistics and causing model bias for the long-standing hospitals.
Inspired by the above observation, we introduce a more practical scenario called Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) (shown in Figure 1). Specifically, P2MDCL assumes that the well-annotated source and unlabeled target domains are distributed across different clients and there exists a global server merely communicating with each client and integrating the received model parameters from clients. Finally, the server broadcasts the consensus model to all
clients for their use to reach the win-win deal. The key challenge for P2MDCL is to learn a more generic model by solving two core issues: 1) how to achieve domain alignment during iterative communication; and 2) how to enhance discriminative feature learning.
In this paper, we propose a novel Mask-Driven Federated Network (MDFNet) to address P2MDCL. First, our MDFNet introduces two orthogonal masks on the high-level features in each client to activate domain-invariant and domain-specific semantics, respectively. In practice, we minimize the confusion between these two masks to achieve high-quality feature separation and semantic complementarity. Second, the unlabeled target client adopts adaptive self-supervised optimization to learn more discriminative representations via pseudo-label generation. Finally, MDFNet adopts a progressive weighting scheme to balance the effect of each client during model integration on the server: it draws more knowledge from the labeled client to adjust the model of the unlabeled client during the initial communication rounds, and the maturing unlabeled-client model later yields a positive effect on the feature learning of the labeled client. The main contributions of our work are summarized as:
• First, we are the first to take into account the "win-win" and privacy requirements under unsupervised domain adaptation scenarios by introducing Privacy Protected Multi-Domain Collaborative Learning (P2MDCL).
• Second, we propose an effective algorithm, MDFNet, that combats the domain shift in a federated training mechanism and reaches a win-win deal for all involved domains.
• Finally, we derive the generalized error bound for our method, which theoretically verifies the rationality of MDFNet. Moreover, extensive experimental results and analyses empirically illustrate the effectiveness of our method in solving P2MDCL.
2 RELATED WORK
Domain Adaptation. Unsupervised domain adaptation (Cui et al., 2020) attempts to build a model from well-labeled source and unlabeled target data by mitigating the domain mismatch. Along this line, recent explorations mainly adopt discrepancy metric-based methods (Yan et al., 2017; Tzeng et al., 2014) and adversarial training schemes (Zhang et al., 2019; Tzeng et al., 2017) to learn domain-invariant features. Although these solutions effectively reduce the influence of domain discrepancy, practical applications rarely permit the co-existence of source and target data, due to the limited storage of small devices and data privacy. This demand stimulates the development of source-free domain adaptation (Liang et al., 2020; Kurmi et al., 2021), which provides only the well-trained source model for knowledge adaptation on the target domain. In addition, Peng et al. (2020) consider the target domain and multiple source domains as the centralized server and clients, respectively, and adopt a federated learning fashion to achieve domain adaptation with multiple discriminators, which is vulnerable to attacks because source and target features are transmitted to the discriminators in the centralized target domain. Even though these strategies achieve transfer ability comparable to UDA solutions, empirical studies illustrate that current domain adaptation techniques fail to learn a generalized model for both source and target domains: they focus only on improving target performance while bringing no benefit to the source domain. To this end, this paper poses a novel and practical scenario, privacy protected multi-domain collaborative learning (P2MDCL), where source and target domains are both regarded as clients that independently communicate with a server, which produces and broadcasts the consensus model to the clients for their use.
Federated Learning (FL). FL allows multiple clients to collaboratively complete the same task without exchanging data across clients (Yang et al., 2019). Along this line, recent works mainly focus on the semi-supervised scenario (FSSL), where FedMatch (Jeong et al., 2021) allocates unlabeled data on the client side and labeled data on the server, while FedIRM (Liu et al., 2021) deploys both only on the clients. However, they both assume that the instances across all clients are sampled from an identical distribution. Moreover, Smith et al. (2017); Liu et al. (2020) explore FSSL with non-i.i.d. data by supposing that each client contains several well-annotated instances for training. Differently, our considered P2MDCL closely approximates reality: it involves several clients without any annotations and exhibits significant domain discrepancy across all clients.
3 THE PROPOSED METHOD
3.1 PROBLEM DEFINITION AND MOTIVATION
The P2MDCL scenario assumes there are $L$ well-annotated source clients $D_{l_i} = \{(x^l_{(i)j}, y^l_{(i)j})\}_{j=1}^{n_{l_i}}$ $(i \in \{1, \dots, L\})$ and $U$ unlabeled target clients $D_{u_k} = \{x^u_{(k)j}\}_{j=1}^{n_{u_k}}$ $(k \in \{L+1, \dots, L+U\})$, where $x$ and $y$ denote an input sample and its ground-truth label, respectively. The instances of these clients come from different distributions but share an identical category space, and clients are not allowed to exchange private data with each other. Akin to federated learning, an additional global server in P2MDCL collects and assembles all clients' network parameters to form the consensus model. The main motivation of P2MDCL is to address the negative effects of insufficient training samples in $D_{l_i}$ and label shortage in $D_{u_k}$, reaching a "win-win" deal across all clients. We face two challenges in solving P2MDCL: 1) how to reduce the significant distribution discrepancy while protecting data privacy, and 2) how to learn more generic and discriminative representations for unlabeled target clients. To this end, this work proposes an effective Mask-Driven Federated Network (MDFNet), which deploys mask-driven disentanglement to locally separate domain-specific/invariant features, and explores adaptive self-supervised optimization to promote the discriminative ability of unlabeled target clients.
3.2 MASK-DRIVEN DISENTANGLEMENT
Feature separation is a commonly used strategy in domain adaptation to disentangle latent representations into domain-specific and domain-invariant features (Bousmalis et al., 2016; Peng et al., 2019). However, such methods typically develop two separate networks to extract the corresponding features, which increases the storage burden on local devices with insufficient computational resources. Peng et al. (2019) point out that the high-level neurons of a feature extractor actually involve both domain-specific and domain-invariant knowledge. Inspired by (Chattopadhyay et al., 2020), we explore binary masks to achieve feature disentanglement by activating the neurons of interest.
For brevity, we omit the symbols $l/u$ and $(k)$ in the following illustration. As Figure 2 shows, each client of our MDFNet contains a basic feature encoder parameterized by $\theta_e$, mapping the raw input into the hidden space via $g_i = \theta_e(x_i) \in \mathbb{R}^d$. Subsequently, two additional parameters $\hat{m}^s, \hat{m}^I \in \mathbb{R}^d$ are introduced into the local network and activated to form the mask probabilities via the sigmoid function $\sigma(\cdot)$: $m^s = \sigma(\hat{m}^s)$ and $m^I = \sigma(\hat{m}^I)$. For each feature $g_i$, based on the mask probabilities, we sample binary domain-specific and domain-invariant masks ($m^s_i, m^I_i \in \{0,1\}^d$) from Bernoulli distributions. We then obtain the domain-specific and domain-invariant features via element-wise multiplication $\otimes$ between the binary masks and the features, i.e., $g^s_i = m^s_i \otimes g_i$ and $g^I_i = m^I_i \otimes g_i$. Moreover, we adopt three strategies to achieve high-quality feature separation. Concretely, each client first minimizes the semantic overlap between $g^I_i$ and $g^s_i$ so that they store complementary information. Motivated by Rahman & Wang (2016), we design the following soft-interactive loss:
$$L_s = \sum_i \frac{\langle g^s_i, g^I_i \rangle}{\mathrm{sum}(g^s_i + g^I_i - g^s_i \otimes g^I_i)}, \qquad (1)$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product of two feature vectors, and $\mathrm{sum}(\cdot)$ denotes the sum of all elements of a vector. This approximately reflects the information overlap between the two mask distributions. Minimizing the soft-interactive loss gradually increases the difference between $m^s_i$ and $m^I_i$, which therefore activate different neurons. Similar to DSN (Bousmalis et al., 2016), each client also develops an individual classifier $\theta_c(\cdot)$ that maps domain-invariant features to the category probability distribution $\theta_c(g^I_j)$. The cross-entropy loss between the ground truth and the prediction strengthens the discriminative ability of the domain-invariant features. On the other hand, we also feed the combination of $g^s_i$ and $g^I_i$ into the decoder $\theta_d(\cdot)$ to reconstruct the original input with $L_r = \sum_i \|\theta_d(g^s_i, g^I_i) - x_i\|_2^2$. Thus, the overall loss function of mask-driven disentanglement for labeled clients is formulated as:
$$\min_{\theta_c, \theta_e, \theta_d, \hat{m}^s, \hat{m}^I} L_{lo} = \sum_i -y_i \log\big(\theta_c(g^I_i)\big) + L_r + L_s, \qquad (2)$$
where we adopt the straight-through estimator (Bengio et al., 2013) to progressively optimize $\hat{m}^s$ and $\hat{m}^I$, since back-propagating through the discrete binary masks is invalid.
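A minimal PyTorch sketch of this module is given below; it covers the straight-through mask sampling and the soft-interactive loss of Eq. (1). The tensor shapes and the small epsilon stabilizer are our own assumptions:

```python
import torch

def sample_binary_mask(m_hat: torch.Tensor) -> torch.Tensor:
    # Straight-through estimator (Bengio et al., 2013): the forward pass
    # uses a hard Bernoulli sample, while gradients flow through the
    # sigmoid probabilities sigma(m_hat).
    probs = torch.sigmoid(m_hat)
    hard = torch.bernoulli(probs)
    return hard + probs - probs.detach()

def soft_interactive_loss(g_s: torch.Tensor, g_i: torch.Tensor) -> torch.Tensor:
    # Eq. (1): penalize overlap between domain-specific features g_s and
    # domain-invariant features g_i (shape [batch, d]); the 1e-8 term is
    # an added numerical stabilizer not present in the paper.
    inner = (g_s * g_i).sum(dim=1)
    union = (g_s + g_i - g_s * g_i).sum(dim=1)
    return (inner / (union + 1e-8)).sum()

# Hypothetical usage with hidden dimension d = 2048.
d = 2048
m_hat_s = torch.zeros(d, requires_grad=True)  # domain-specific mask logits
m_hat_i = torch.zeros(d, requires_grad=True)  # domain-invariant mask logits
g = torch.randn(8, d)                         # encoder output theta_e(x)
g_spec = sample_binary_mask(m_hat_s) * g
g_inv = sample_binary_mask(m_hat_i) * g
loss_s = soft_interactive_loss(g_spec, g_inv)  # added to CE and L_r as in Eq. (2)
```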
3.3 ADAPTIVE SELF-SUPERVISED OPTIMIZATION
Due to the availability of annotations in the well-labeled clients, we can easily calibrate the predicted category distribution and generate discriminative features using the supervision of ground truth. However, we cannot directly adopt this supervised learning manner in unlabeled clients, where annotations are absent. Inspired by the successful application of pseudo-labels to the UDA problem (Xie et al., 2018; Gu et al., 2020; Liang et al., 2019; Morerio et al., 2020), we propose an adaptive clustering optimization module that gradually produces pseudo-labels as "ground-truth" supervision.
Specifically, after each round of communication, the unlabeled client first receives the model broadcast from the server and uses it to initialize the parameters $\theta_e, \theta_d, \theta_c, \hat{m}^s, \hat{m}^I$. Before further optimization, the client annotates its local data with the received global model, i.e., $\hat{y}_j = \arg\max_k \theta_c(g^I_j)_k$. With these predictions, the initial centroid of each category is computed as $O_k = \frac{\sum_j \mathbb{1}(\hat{y}_j = k)\, g^I_j}{\sum_j \mathbb{1}(\hat{y}_j = k)}$, where $\mathbb{1}(\cdot)$ is the indicator function. Since the server model integrates knowledge from multiple clients, the domain shift negatively affects the accuracy of the inferred $\hat{y}_j$. To reduce this influence, we adopt an iterative approach that further updates the class centers and pseudo-labels with the local data points. The proposed adaptive clustering optimization mainly includes two operations. The first step reassigns the label of each instance with spherical K-means (Buchta et al., 2012):
$$\hat{y}_j = \arg\min_k \tilde{d}\big(g^I_j, O_k\big), \qquad \tilde{d}\big(g^I_j, O_k\big) = \frac{1}{2}\left(1 - \frac{\langle g^I_j, O_k \rangle}{\|g^I_j\| \cdot \|O_k\|}\right). \qquad (3)$$
With the reattached annotations, the second step updates the class prototypes with $O_k = \sum_j \mathbb{1}(\hat{y}_j = k)\, \frac{g^I_j}{\|g^I_j\|}$. The above two steps are repeated until convergence.
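A compact sketch of this two-step refinement follows; the number of iterations, the stopping rule, and the use of the normalized-sum prototype in both steps are our simplifying assumptions:

```python
import torch
import torch.nn.functional as F

def refine_pseudo_labels(g: torch.Tensor, y_hat: torch.Tensor, K: int,
                         n_iter: int = 10) -> torch.Tensor:
    # Adaptive clustering of Eq. (3): alternate between (a) recomputing
    # class prototypes from length-normalized features and (b) reassigning
    # labels by cosine distance (spherical K-means).
    # g: [n, d] domain-invariant features; y_hat: [n] initial predictions.
    for _ in range(n_iter):
        g_norm = F.normalize(g, dim=1)
        centers = torch.stack([g_norm[y_hat == k].sum(0) for k in range(K)])
        sim = g_norm @ F.normalize(centers, dim=1).T   # cosine similarity
        y_hat = (0.5 * (1.0 - sim)).argmin(dim=1)      # Eq. (3) reassignment
    return y_hat
```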
After the adaptive self-supervised optimization, we obtain the final pseudo-label for each sample and use it as supervision to optimize the local models. However, due to the domain mismatch, not all samples of the unlabeled clients contribute to parameter sharing, as some samples have low reliability and high uncertainty. It is therefore crucial to distinguish positive from negative samples by identifying their potential benefit to the labeled clients. To this end, we add an entropy-minimization (EM) term to further improve the certainty of the category predictions and reformulate Eq. (2) for the unlabeled clients as:
$$\min_{\theta_c, \theta_e, \theta_d, \hat{m}^s, \hat{m}^I} L_{uo} = \sum_j \Big( -\mathbb{I}\big(\max(\theta_c(g^I_j)) \ge \sigma\big)\, y_j \log\big(\theta_c(g^I_j)\big) - \theta_c(g^I_j) \log\big(\theta_c(g^I_j)\big) \Big) + L_r + L_s, \qquad (4)$$
where $\mathbb{I}(\cdot)$ denotes the indicator that filters out samples whose maximum prediction confidence $\max(\theta_c(g^I_j))$ is less than a threshold $\sigma$, set to 0.1 by default throughout our experiments.
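A hedged sketch of the classification part of Eq. (4) follows, treating $\theta_c(g^I_j)$ as softmax probabilities; the reconstruction and soft-interactive terms $L_r$ and $L_s$ would be added elsewhere:

```python
import torch
import torch.nn.functional as F

def unlabeled_client_loss(logits: torch.Tensor, pseudo_y: torch.Tensor,
                          sigma: float = 0.1) -> torch.Tensor:
    # Pseudo-label cross-entropy on confident samples plus an
    # entropy-minimization term over all samples, as in Eq. (4).
    probs = F.softmax(logits, dim=1)
    confident = probs.max(dim=1).values >= sigma         # indicator I(.)
    ce = F.cross_entropy(logits, pseudo_y, reduction="none")
    pseudo_term = (confident.float() * ce).sum()
    entropy_term = -(probs * torch.log(probs + 1e-8)).sum()
    return pseudo_term + entropy_term
```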
3.3.1 FEDERATED TRAINING
The overall training of our MDFNet involves two important procedures: a) local client training, and b) global server model integration. The clients and server collaboratively execute these steps per communication round and repeat the process until the model converges or the maximum number of communication rounds is reached.
Independent Client Training. In each round, the server broadcasts the consensus model integrated from the last round to all available clients for the initialization of local models. The well-annotated clients then employ their local data to optimize all the modules for one epoch via Eq. (2), while the clients without labels rely on the adaptive clustering optimization to generate pseudo-labels for their samples and update their models with Eq. (4). The clients will locally store the parameters of domain-specific mask and decoder and use them to initialize the network in the next round.
Model Integration. After the local training, the clients send their local models (excluding the parameters of the domain-specific mask and the decoder) to the server, where the models are integrated to reach a consensus. However, adopting pseudo-labels as supervision significantly reduces the reliability of the models, especially in the initial training stages. To avoid the negative effect of pseudo-labels, the server assigns different weights to labeled and unlabeled clients: $\tilde{\theta} = \frac{1-\eta_r}{L} \sum_{i=1}^{L} \theta_{(l_i)} + \frac{\eta_r}{U} \sum_{i=L+1}^{L+U} \theta_{(u_i)}$, where $\theta \in \{\theta_e, \theta_d, \theta_c, \hat{m}^I, \hat{m}^s\}$ and $\eta_r = \frac{1-\exp(-\rho r)}{2(1+\exp(-\rho r))}$; here $r$ is the communication round (each round corresponds to one epoch), and $\rho$ is set to 10 in the experiments.
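A sketch of this server-side integration over client state dicts is given below; the treatment of which parameters are shared follows our reading of the text:

```python
import math

def integration_weight(r: int, rho: float = 10.0) -> float:
    # eta_r = (1 - exp(-rho * r)) / (2 * (1 + exp(-rho * r)))
    return (1.0 - math.exp(-rho * r)) / (2.0 * (1.0 + math.exp(-rho * r)))

def integrate(labeled: list, unlabeled: list, r: int) -> dict:
    # Weighted average of the shared parameters (state dicts) of the
    # L labeled and U unlabeled clients, following the formula above.
    eta = integration_weight(r)
    L, U = len(labeled), len(unlabeled)
    consensus = {}
    for name in labeled[0]:
        consensus[name] = (1.0 - eta) / L * sum(sd[name] for sd in labeled) \
                          + eta / U * sum(sd[name] for sd in unlabeled)
    return consensus
```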
| 1. What is the focus and contribution of the paper on privacy-protected multi-domain collaborative learning?
2. What are the strengths of the proposed approach, particularly in dealing with the domain shift in federated training?
3. What are the weaknesses of the paper regarding its motivation, explanation, and implementation details?
4. Do you have any concerns or questions regarding the proposed Mask-Driven Federated Network (MDFNet)?
5. Are there any limitations or potential risks associated with the approach that the authors did not discuss? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a Mask-Driven Federated Network (MDFNet) that reaches a "win-win" deal for multiple domains with data protected, in order to solve the Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) problem. Specifically, each domain is armed with an individual local model via a mask disentanglement mechanism to learn domain-invariant semantics. Second, the centralized server refines the global invariant model by integrating and exchanging local knowledge across all domains. Moreover, adaptive self-supervised optimization is deployed to learn discriminative features for unlabeled domains.
Review
Strengths:
This paper proposes the Mask-Driven Federated Network (MDFNet) to reach a “win-win” deal for multiple domains with data protected.
The proposed MDFNet is developed to cope with the domain shift in the federated training mechanism.
This paper derives the generalized error bound for the proposed model.
Weaknesses:
The motivation of this method for coping with P^2MDCL is not clear. Specifically, why does this paper use orthogonal masks? How do the two orthogonal masks achieve domain-specific/invariant features?
The adaptive self-supervised optimization is not clearly explained. For example, the updating process of the adaptive clustering optimization in Eq. (3) needs more explanation. Specifically, what are the parameters for updating the class prototype \mathcal{O}_{k}?
The paper fails to clearly introduce all implementation details. To be specific, how do the hyperparameters affect the performance, such as \sigma in Eq. (4) and other related parameters in the training stage?
ICLR | Title
Local Image-to-Image Translation via Pixel-wise Highway Adaptive Instance Normalization
Abstract
Recently, image-to-image translation has seen significant success. Among many approaches, image translation based on an exemplar image, which contains the target style information, has been popular, owing to its capability to handle multimodality as well as its suitability for practical use. However, most of the existing methods extract the style information from an entire exemplar and apply it to the entire input image, which introduces excessive image translation in irrelevant image regions. In response, this paper proposes a novel approach that jointly extracts the local masks of the input image and the exemplar as the targeted regions to be involved in image translation. In particular, the main novelty of our model lies in (1) co-segmentation networks for local mask generation and (2) the local mask-based highway adaptive instance normalization technique. We present quantitative and qualitative evaluation results to show the advantages of our proposed approach. Finally, the code is available at https://github.com/AnonymousIclrAuthor/Highway-Adaptive-Instance-Normalization.
1 INTRODUCTION
Unpaired image-to-image translation (or in short, image translation) based on generative adversarial networks (Goodfellow et al., 2014) aims to transform an input image from one domain to another, without using paired data between different domains (Zhu et al., 2017a; Liu et al., 2017; Kim et al., 2017; Choi et al., 2018). An unpaired setting, however, is inherently multimodal, meaning that a single input image can be mapped to multiple different outputs within a target domain. For example, when translating the hair color of a given image into blonde, the detailed region (e.g., upper vs. lower, partial vs. entire) and the color (e.g., golden, platinum, silver) may vary.
Previous studies have achieved such multimodal outputs by adding a random noise sampled from a pre-defined prior distribution (Zhu et al., 2017b) or taking a user-selected exemplar image as additional input, which contains the detailed information of an intended target style (Chang et al., 2018). Recent studies (Lin et al., 2018; Ma et al., 2018), including MUNIT (Huang et al., 2018) and DRIT (Lee et al., 2018), combine these two approaches and show state-of-the-art performance by separating (i.e., disentangling) the content and style information of a given image through two different encoder networks. However, existing exemplar-based image translation methods have several limitations. First, the style information is typically extracted and encoded from the entire region of a given exemplar and is thus potentially noisy, since it also covers regions irrelevant to the target attribute to transfer. Suppose we translate the hair color of an image using an exemplar image. Since the hair color information is available only in the hair region, the style information extracted from the entire exemplar may contain irrelevant information (e.g., the color of the wall and the edge pattern of the floor), which should not be reflected in the intended image translation.
On the other hand, the extracted style is then applied to the entire region of the target image, even though particular regions should be kept as they are. Due to this limitation, some previous approaches (Huang et al., 2018; Lee et al., 2018) often distort irrelevant regions of an input image, such as the background.
Furthermore, when multiple attributes are involved in an exemplar image, one has no choice but to impose all of them when translating a given image. For example, in translating a person's facial image, if the exemplar has two attributes, (1) a smiling expression and (2) blonde hair, then both attributes have to be transferred with no other option.
To tackle these issues, we propose a novel LOcal Mask-based Image Translation approach, called LOMIT, which jointly generates a pixel-wise soft binary mask of an exemplar (i.e., the source region from which to extract the style information) and that of an input image to translate (i.e., the target region to which to apply the extracted style). This approach has something in common with recent approaches that leverage an attention mask in image translation (Pumarola et al., 2018; Chen et al., 2018; Yang et al., 2018; Ma et al., 2018; Mejjati et al., 2018). In most of these approaches, the attention mask (extracted from an input image) determines the target region to which a translation is applied. We expand those approaches by jointly extracting two masks, from the input and the exemplar image respectively, acting as the attention mask of the input and the relevant-region (foreground) extractor of the exemplar.
The main novelty of LOMIT is that, to jointly obtain the local masks of two images, we utilize co-segmentation networks (Rother et al., 2006), which aim (1) to extract the targeted style information without noise introduced from irrelevant regions and (2) to translate only the necessary region of a target image while minimizing its distortion. While co-segmentation approaches were originally proposed to capture the regions of a common object existing in multiple input images (Rother et al., 2006; Li et al., 2018), we adopt and train co-segmentation networks for our own purpose.
Once the local masks are obtained, LOMIT extends a recently proposed image translation technique, adaptive instance normalization, using highway networks (Srivastava et al., 2015): it computes the weighted average of the input and the translated pixel values, using the above pixel-wise local mask values as per-pixel linear combination weights. LOMIT has the additional advantage of being able to manipulate the computed masks to selectively transfer an intended style, e.g., choosing either the hair region (to transfer hair color) or the facial region (to transfer facial expression).
The effectiveness of LOMIT is evaluated on two facial datasets, via a user study and other quantitative measures such as the inception score and the classification accuracy.
2 BASIC SETUP
We define “content” as common features (an underlying structure) across all domains (e.g., the pose of a face, the location and the shape of eyes, a nose, a mouth, and hair), and “style” as a representation of the structure (e.g., background color, facial expression, skin tone, and hair color).
As shown in Fig. 1, we assume that an image $x$ can be represented as $x = c \oplus s$, where $c$ is a content code in a content space and $s$ is a style code in a style space. The operator $\oplus$ combines and converts the content code $c$ and the style code $s$ into a complete image $x$.
By considering the local mask indicating the relevant region (or simply, the foreground) from which to extract the style or to which to apply it, we further assume that $s$ decomposes as $s = s^f \oplus s^b$, where $s^f$ is the style code extracted from the foreground region and $s^b$ is that from the background region. Separating the integrated style space $S$ into a foreground style space $S^f$ and a background style space $S^b$ plays a role in disentangling the style feature representation.1 The pixel-wise soft binary mask $m$ of an image $x$ is represented as a matrix with the same spatial resolution as $x$. Each entry of $m$ lies between 0 and 1, indicating the degree to which the corresponding pixel belongs to the foreground. The local foreground/background regions $x^f$/$x^b$ of $x$ are then obtained as
$$x^f = m \odot x, \qquad x^b = (1 - m) \odot x, \qquad (1)$$
where $\odot$ is an element-wise multiplication. Finally, our assumption is extended to $x = c \oplus s^f \oplus s^b$, where $c$, $s^f$, and $s^b$ are obtained by the content encoder $E_c$, the foreground style encoder $E^f_s$, and the background style encoder $E^b_s$, respectively, all of which are shared across multiple domains in LOMIT, i.e.,
$$\{c_x, s^f_x, s^b_x\} = \{E_c(x), E^f_s(x^f), E^b_s(x^b)\}, \qquad c_x \in C, \; s^f_x \in S^f, \; s^b_x \in S^b \qquad (2)$$
It is critical in LOMIT to properly learn to generate the local mask involved in image translation. To this end, we propose to combine the mask generation networks with our novel highway adaptive instance normalization, as will be described in Section 3.2.
1 To verify that the representations are properly disentangled, we refer the readers to the 2D embedding visualization of each space ($C$, $S^f$, $S^b$) in Fig. 8 in the appendix.
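As a small sketch of Eq. (1) and the role of the mask (the tensor shapes and the single-channel mask broadcast are assumptions of this example):

```python
import torch

def split_by_mask(x: torch.Tensor, m: torch.Tensor):
    # Eq. (1): a pixel-wise soft mask m in [0, 1], broadcast over the
    # channel dimension, separates image x into foreground/background.
    x_f = m * x            # foreground region
    x_b = (1.0 - m) * x    # background region
    return x_f, x_b

# Hypothetical encoders E_c, E_s_f, E_s_b would then produce the codes
# of Eq. (2): c = E_c(x), s_f = E_s_f(x_f), s_b = E_s_b(x_b).
x = torch.randn(1, 3, 128, 128)
m = torch.rand(1, 1, 128, 128)
x_f, x_b = split_by_mask(x, m)
```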
3 LOCAL IMAGE TRANSLATION MODEL
We first denote $x_1 \in X_1$ and $x_2 \in X_2$ as images from domains $X_1$ and $X_2$, respectively. As shown in Fig. 2, LOMIT converts a given image $x_1$ to $X_2$ and vice versa, i.e., $x_{1\to2} = G(h(E_c(x_1), E^f_s(x^f_2), E^b_s(x^b_1)))$ and $x_{2\to1} = G(h(E_c(x_2), E^f_s(x^f_1), E^b_s(x^b_2)))$, where $G$ is the decoder network and $h$ is our proposed local mask-based highway adaptive instance normalization (in short, HAdaIN), described in detail in Section 3.2.
For brevity, we omit the domain index notation in, say, $m = \{m_1, m_2\}$ and $x = \{x_1, x_2\}$, unless needed for clarification.
3.1 LOCAL MASK EXTRACTION
LOMIT utilizes the local mask $m$ to separate the image $x$ into the foreground and background regions $x^f$ and $x^b$. That is, we jointly extract the local masks of the input and exemplar images, as the regions effectively involved in image translation, via co-segmentation networks. For example, if LOMIT identifies the hair-color difference between the input image and the exemplar, e.g., blonde vs. black, then the local masks should be obtained as the hair regions of the two images.
As shown in Fig. 3, given two images $x_1$ and $x_2$, the co-segmentation networks first encode the content of each as $c_1$ and $c_2$ via the content encoder $E_c$. Next, in the case of computing the segmentation of $x_2$, after average-pooling $c_1$ globally, we forward it to an MLP to obtain the channel-wise soft binary mask $c^{attn}_1 = \sigma(\mathrm{MLP}(c_1))$, which is then multiplied with $c_2$ in a channel-wise manner, i.e., $c^{attn}_1 \odot c_2$. This step transfers the object information from $x_1$ to $x_2$. Finally, we forward-propagate this output through the attention network $A$ to obtain the local mask $m_2$ of $x_2$, i.e., $m_2 = A(c^{attn}_1 \odot c_2)$. The same process applies to the opposite case, resulting in $m_1 = A(c^{attn}_2 \odot c_1)$. Note that our co-segmentation networks are trained in an end-to-end manner with no direct supervision.
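A hedged sketch of this co-segmentation step follows; the exact layer configurations of the MLP and the attention network A are not specified here, so the modules below are illustrative stand-ins:

```python
import torch
import torch.nn as nn

class CoSegmentation(nn.Module):
    # Content c1 gates the channels of c2 before the attention network A
    # predicts the soft mask m2 (and symmetrically for m1).
    def __init__(self, channels: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, c1: torch.Tensor, c2: torch.Tensor) -> torch.Tensor:
        pooled = c1.mean(dim=(2, 3))                  # global average pooling
        c_attn = self.mlp(pooled)[:, :, None, None]   # channel-wise gate
        return self.attn(c_attn * c2)                 # local mask m2 of x2
```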
3.2 HIGHWAY ADAPTIVE INSTANCE NORMALIZATION
Adaptive instance normalization (AdaIN) is an effective style transfer technique (Huang & Belongie, 2017). Generally, it matches the channel-wise statistics, e.g., the mean and the variance, of the activation map of an input image with those of a style image. In the context of image translation, MUNIT (Huang et al., 2018) extends AdaIN in a way that the target mean and variance are computed as the outputs of the trainable functions β and γ of a given style code, i.e.,

AdaIN_{x_1→x_2}(c_1, s_2) = γ(s_2) · (c_1 − µ(c_1)) / σ(c_1) + β(s_2).   (3)
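Eq. (3) is standard AdaIN with the affine parameters predicted from a style code; a short reference implementation is sketched below, where the (γ, β) pair is assumed to be produced by the MLPs introduced next.

```python
import torch

def adain(c, style_params, eps=1e-5):
    """Eq. (3): normalize content features channel-wise, then re-style them.
    c: (B, C, H, W); style_params: (gamma, beta), each of shape (B, C)."""
    gamma, beta = style_params
    mu = c.mean(dim=(2, 3), keepdim=True)
    sigma = c.std(dim=(2, 3), keepdim=True) + eps  # eps guards flat feature maps
    return gamma[..., None, None] * (c - mu) / sigma + beta[..., None, None]
```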
As we pointed out earlier, such a transformation is applied globally over the entire region of an image, which may unnecessarily distort irrelevant regions. Hence, we formulate our local mask-based highway AdaIN (HAdaIN) as
HAdaIN_{x_1→x_2}(m_1, c_1, s_2^f, s_1^b) = m_1 ⊙ AdaIN_{x_1→x_2}(c_1, s_2^f) + (1 − m_1) ⊙ AdaIN_{x_1→x_1}(c_1, s_1^b),   (4)

where each of β and γ in Eq. (3) is defined as a multi-layer perceptron (MLP), i.e., [β(s^f); γ(s^f)] = MLP^f(s^f) and [β(s^b); γ(s^b)] = MLP^b(s^b). Note that we use different MLPs for the foreground and the background style code inputs. The first term of Eq. (4) corresponds to the local region of the input image translated by the foreground style, while the second corresponds to the complementary region, where the original style of the input is kept as it is.
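The highway gating of Eq. (4) then reduces to a pixel-wise convex combination of two AdaIN branches. The sketch below reuses the adain function above; the MLP widths are illustrative.

```python
import torch.nn as nn

class HAdaIN(nn.Module):
    """Eq. (4): the soft mask m gates between the foreground-styled branch and
    the branch re-styled with the input's own background style."""
    def __init__(self, style_dim, num_channels):
        super().__init__()
        self.mlp_f = nn.Linear(style_dim, 2 * num_channels)  # -> [beta; gamma] from s^f
        self.mlp_b = nn.Linear(style_dim, 2 * num_channels)  # -> [beta; gamma] from s^b

    def forward(self, m, c, s_f, s_b):
        beta_f, gamma_f = self.mlp_f(s_f).chunk(2, dim=1)
        beta_b, gamma_b = self.mlp_b(s_b).chunk(2, dim=1)
        fg = adain(c, (gamma_f, beta_f))  # translated by the exemplar's foreground style
        bg = adain(c, (gamma_b, beta_b))  # input's background style kept as is
        return m * fg + (1.0 - m) * bg    # pixel-wise highway combination
```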
4 TRAINING OBJECTIVES
This section describes each of our loss terms in the objective function used for training our model.
4.1 STYLE AND CONTENT RECONSTRUCTION LOSS
The foreground style of the translated output should be close to that of the exemplar, while the background style of the translated output should be close to that of the original input image. We formulate these criteria as the following style reconstruction loss terms:
L_{s^f}^{1→2} = E_{x_{1→2}^f, x_2^f} [‖E_s^f(x_{1→2}^f) − E_s^f(x_2^f)‖_1],   (5)
L_{s^b}^{1→2} = E_{x_{1→2}^b, x_1^b} [‖E_s^b(x_{1→2}^b) − E_s^b(x_1^b)‖_1].   (6)
From the perspective of content information, the content feature of an input image should be consistent with that of its translated output, which is represented as the content reconstruction loss

L_c^{1→2} = E_{x_{1→2}, x_1} [‖E_c(x_{1→2}) − E_c(x_1)‖_1].   (7)

Note that the content reconstruction is imposed across the entire region of the input image, regardless of the local mask.
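All three reconstruction terms are plain L1 losses between encoder outputs; a sketch follows, with the masked regions x^f, x^b assumed precomputed as in Eq. (1).

```python
import torch.nn.functional as F

def reconstruction_losses(xf_out, xf_ex, xb_out, xb_in, x_out, x_in, E_c, E_s_f, E_s_b):
    """Eqs. (5)-(7) for the 1 -> 2 direction."""
    l_sf = F.l1_loss(E_s_f(xf_out), E_s_f(xf_ex))  # Eq. (5): fg style follows the exemplar
    l_sb = F.l1_loss(E_s_b(xb_out), E_s_b(xb_in))  # Eq. (6): bg style follows the input
    l_c = F.l1_loss(E_c(x_out), E_c(x_in))         # Eq. (7): content preserved everywhere
    return l_sf, l_sb, l_c
```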
4.2 IMAGE RECONSTRUCTION LOSS
As an effective supervision approach in an unpaired image translation setting, we adopt the image-level cyclic consistency loss (Zhu et al., 2017a) between an input image and its output after two consecutive image translations, X_1 → X_2 → X_1 (or X_2 → X_1 → X_2), i.e.,
L_cyc^{1→2→1} = E_{x_1} [‖x_{1→2→1} − x_1‖_1].   (8)
Meanwhile, similar to previous studies (Huang et al., 2018; Lee et al., 2018), we perform not only the inter-domain translation (x_1 → x_{1→2}) but also the intra-domain translation (x_1 → x_{1→1}). This intra-domain translation should work similarly to an auto-encoder (Larsen et al., 2016), and the corresponding loss term is written as
L_x^{1→1} = E_{x_1} [‖x_{1→1} − x_1‖_1].   (9)
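Both image-level terms are again L1 distances; a minimal sketch, assuming the translated images are precomputed:

```python
import torch.nn.functional as F

def image_losses(x1, x1_to_2_to_1, x1_to_1):
    """Eq. (8) cyclic consistency and Eq. (9) intra-domain reconstruction."""
    l_cyc = F.l1_loss(x1_to_2_to_1, x1)  # x1 -> x2 -> x1 should recover x1
    l_idt = F.l1_loss(x1_to_1, x1)       # intra-domain translation acts as an auto-encoder
    return l_cyc, l_idt
```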
4.3 DOMAIN ADVERSARIAL LOSS
To approximate the real-data distribution via our model, we adopt a domain adversarial loss by introducing the discriminator networks D_src. Among the loss terms proposed in the original GAN (Goodfellow et al., 2014), LSGAN (Mao et al., 2017), and WGAN-GP (Arjovsky et al., 2017; Gulrajani et al., 2017), we chose WGAN-GP, which empirically works best in our setting, as the adversarial method. That is, our adversarial loss is written as
L_adv^{1→2} = E_{x_1}[D_src(x_1)] − E_{x_{1→2}}[D_src(x_{1→2})] − λ_gp E_{x̂}[(‖∇_{x̂} D_src(x̂)‖_2 − 1)^2],   (10)

where x_{1→2} = G(h(c_1, s_2^f, s_1^b)), x̂ is sampled uniformly along straight lines between pairs of real and translated images, and λ_gp = 10. Also, we apply the loss proposed in PatchGAN (Isola et al., 2017; Zhu et al., 2017a).
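The gradient-penalty term of Eq. (10) can be implemented in the usual WGAN-GP fashion; the sketch below follows Gulrajani et al. (2017) and is not specific to LOMIT's code.

```python
import torch

def gradient_penalty(D_src, x_real, x_fake, lambda_gp=10.0):
    """Last term of Eq. (10): unit-gradient-norm penalty on interpolated samples."""
    alpha = torch.rand(x_real.size(0), 1, 1, 1, device=x_real.device)
    x_hat = (alpha * x_real + (1.0 - alpha) * x_fake).requires_grad_(True)
    grads = torch.autograd.grad(D_src(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```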
4.4 MULTI-ATTRIBUTE TRANSLATION LOSS
We use an auxiliary classifier (Odena et al., 2016) to cover multi-attribute translation with a single shared model, similar to StarGAN (Choi et al., 2018). The auxiliary classifier D_cls, which shares its parameters with the discriminator D_src except for the last layer, classifies the domain of a given image. In detail, its loss terms are defined as
L_{cls_r}^{1→2} = E_{x_1}[− log D_cls(y_{x_1} | x_1)],   (11)
L_{cls_f}^{1→2} = E_{x_{1→2}}[− log D_cls(y_{x_2} | x_{1→2})],   (12)
where y_x is the domain label of an input image x. Similar to the concept of weakly supervised learning (Zhou et al., 2016; Selvaraju et al., 2017), this loss term plays the role of supervising the local mask m to point out the proper region of the corresponding domain y through the HAdaIN module, allowing our model to extract the style from the proper region of the exemplar.
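A sketch of Eqs. (11)-(12) is below, written with a single-label cross-entropy for readability; a multi-label attribute setup, as in StarGAN, would use a binary cross-entropy per attribute instead.

```python
import torch.nn.functional as F

def classification_losses(D_cls, x_real, y_real, x_fake, y_target):
    """Eq. (11) trains the classifier on real images; Eq. (12) pushes the
    generator output toward the target domain label."""
    l_cls_r = F.cross_entropy(D_cls(x_real), y_real)    # Eq. (11)
    l_cls_f = F.cross_entropy(D_cls(x_fake), y_target)  # Eq. (12)
    return l_cls_r, l_cls_f
```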
4.5 MASK REGULARIZATION LOSSES
We impose several additional regularization losses on local mask generation to improve the overall image generation performance as well as the interpretability of the generated mask.
The first regularization is to minimize the difference of the mask values of those pixels that have similar content information. This helps the local mask consistently capture a semantically meaningful region as a whole, e.g., capturing the entire hair region even when the lighting conditions and the hair color vary significantly within the exemplar. In detail, we design this regularization as minimizing
R_1 = E[ Σ_{i,j} [ |m · 1^T − 1 · m^T| ⊙ (ĉ · ĉ^T) ]_{ij} ],   (13)

where 1 is a vector whose elements are all ones, {1, m} ∈ R^{WH×1}, ĉ ∈ R^{WH×C}, and ĉ = c/‖c‖. The first term is the distance matrix of all pairs of pixel-wise mask values in m, and the second term is the cosine similarity matrix of all pairs of C-dimensional pixel-wise content vectors. Note that we backpropagate the gradients generated by this regularization term only through m to train the co-segmentation networks, but not through ĉ, so that it does not affect the encoder E.
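In code, Eq. (13) compares the WH x WH matrix of pairwise mask differences with the pairwise cosine similarities of the content vectors; the content term is detached so that gradients flow only through the mask, as stated above. A sketch:

```python
import torch
import torch.nn.functional as F

def mask_content_regularizer(m, c):
    """Eq. (13). m: (B, 1, h, w) soft mask; c: (B, C, h, w) content features
    at the same spatial resolution."""
    m_flat = m.flatten(1)                                      # (B, HW)
    c_hat = F.normalize(c.flatten(2).transpose(1, 2), dim=2)   # (B, HW, C), unit rows
    mask_dist = (m_flat.unsqueeze(2) - m_flat.unsqueeze(1)).abs()   # |m_i - m_j|
    content_sim = torch.bmm(c_hat, c_hat.transpose(1, 2)).detach()  # cosine similarity
    return (mask_dist * content_sim).sum(dim=(1, 2)).mean()
```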
The second regularization is to make the local masks of the two images capture only those regions having contrasting styles. This regularization is especially useful when multiple attributes are involved in image translation. For example, if the two facial images have different hair colors but a common facial expression, then the local masks should indicate only the hair regions. We formulate this regularization as maximizing the difference between the foreground styles extracted from the local masks of the two images, which is written as
R_2 = −E[ ‖s_1^f − s_2^f‖_1 ].   (14)
The third regularization is simply to minimize the local mask region (Chen et al., 2018; Pumarola et al., 2018) to encourage the model to focus only on a necessary region involved in image translation, by minimizing
R_3 = E[ ‖m‖_1 ].   (15)
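Both terms are one-liners; the sketch below averages rather than sums over pixels, which only rescales Eq. (15) by a constant.

```python
def style_and_area_regularizers(s_f1, s_f2, m1, m2):
    """Eq. (14): push the two foreground styles apart; Eq. (15): keep masks sparse."""
    r2 = -(s_f1 - s_f2).abs().sum(dim=1).mean()      # negative L1 style distance
    r3 = 0.5 * (m1.abs().mean() + m2.abs().mean())   # L1 mask area, both images
    return r2, r3
```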
4.6 FULL LOSS
Finally, our full loss is defined as
L_D = −L_adv + λ_cls L_{cls_r},
L_G = L_adv + λ_cls L_{cls_f} + λ_{s,c}(L_{s^f} + L_{s^b} + L_c) + λ_x(L_cyc + L_x^{1→1} + L_x^{2→2}) + λ_1 R_1 + λ_2 R_2 + λ_3 R_3,   (16)
where each L without a direction superscript denotes the sum over both directions (1→2 and 2→1), and λ_cls = 1, λ_{s,c} = 1, λ_x = 10, λ_1 = 0.1, λ_2 = 0.01, and λ_3 = 0.0001. Note that our training process contains both the intra-domain translations (x_1 → x_{1→1} and x_2 → x_{2→2}) and the inter-domain translations (x_1 → x_{1→2} and x_2 → x_{2→1}).
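Assembling Eq. (16) with the listed weights is then mechanical; the generator side is sketched below (the discriminator side is −L_adv + λ_cls L_{cls_r}). The individual terms are assumed to be computed by the sketches above and summed over both translation directions.

```python
def generator_loss(l_adv, l_cls_f, l_sf, l_sb, l_c, l_cyc, l_idt1, l_idt2,
                   r1, r2, r3, lam_cls=1.0, lam_sc=1.0, lam_x=10.0,
                   lam_1=0.1, lam_2=0.01, lam_3=1e-4):
    """Generator objective L_G of Eq. (16) with the paper's default weights."""
    return (l_adv + lam_cls * l_cls_f
            + lam_sc * (l_sf + l_sb + l_c)
            + lam_x * (l_cyc + l_idt1 + l_idt2)
            + lam_1 * r1 + lam_2 * r2 + lam_3 * r3)
```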
5 EXPERIMENTS
We evaluate LOMIT and the baseline models on two facial datasets, CelebA (Liu et al., 2015) and EmotioNet (Fabian Benitez-Quiroz et al., 2016). We first describe the datasets and the baseline models in Sections 5.1 and 5.2. Second, we present qualitative comparisons of both multi- and single-attribute translation results against the baseline methods in Section 5.3. Third, we report user study results to validate the human-perceived quality of the translated results in Section 5.4. Lastly, we evaluate the performance of LOMIT using the inception score (Salimans et al., 2016) and the classification accuracy. The model architecture and training details used in the experiments are described in the appendix (Sections 7.1 and 7.2).
5.1 DATASETS
CelebA. The CelebA (Liu et al., 2015) dataset consists of 202,599 face images of celebrities and 40 attribute annotations per image. We pick 10 attributes (i.e., black hair, blond hair, brown hair, smiling, goatee, mustache, no beard, male, heavy makeup, wearing lipstick) that would convey meaningful local masks. We randomly select 2,000 images for testing and use the others for training. Images are center-cropped and scaled down to 128×128.
EmotioNet. The EmotioNet (Fabian Benitez-Quiroz et al., 2016) dataset contains 975,000 images of facial expressions in the wild, each annotated with 12 action units (AUs). Each AU denotes an activation of a specific facial muscle (e.g., jaw drop, nose wrinkler). We crop each image using a face detector 2 and resize them to 128×128. We use 2,000 images for testing and 200,000 images for training.
5.2 BASELINE METHODS
MUNIT. MUNIT (Huang et al., 2018) decomposes an image into a domain-invariant content code and a domain-specific style code. By involving random sampling of latent style codes during training, MUNIT attempts to reflect the multimodal nature of various style domains. We train MUNIT on the CelebA (Liu et al., 2015) dataset and report its results for our comparison.

2 https://github.com/ageitgey/face_recognition
DRIT. DRIT (Lee et al., 2018) employs two encoders, which extract the domain-invariant content information and the domain-specific style information, respectively. The model is trained with a content discriminator that ensures the content space is shared. Its loss functions and training process are similar to those of MUNIT.
5.3 QUALITATIVE RESULTS
CelebA. As shown in Fig. 4, we compare our model with the baseline models on the CelebA dataset (Liu et al., 2015). The baseline models are trained with a dataset corresponding to each target attribute in the topmost row (e.g., in the gender case, the dataset is divided into male/female). On the other hand, when training LOMIT, we construct a set of multiple attributes by combining a few different attributes and train the model for multi-attribute translation (i.e., the columns with a black border, (a), (b), and (c) in Fig. 4). Meanwhile, in order to conduct single-attribute translation, we interactively modify the output masks of the co-segmentation module and forward the manipulated masks into the networks. That is, as illustrated in Figs. 4(a'), (b'), and (c'), we manually remove the mask for both the input and the exemplar and obtain the result for single-attribute translation. Note that the area marked with a red rectangle in the mask indicates the removed area.
The first four rows correspond to the input images, their generated masks, the exemplars, and their generated masks. Each of the last three rows provides comparisons of our model against the baselines. The topmost row indicates the target attribute for each column. Those in black denote multiple attributes, while those in red represent a single attribute after removing the mask subregion (a red rectangle) corresponding to the other attribute. We denote Facial Hair when an image belongs to any of the three classes Beard, Goatee, or Mustache. LOMIT tends to keep the background intact across various classes and to apply the style to the appropriate region, while transferring the properly extracted style (attribute) from the exemplar. Compared to LOMIT, the two other models suffer from undesirable distortion in the background, as shown in the first and second rows from the bottom of (a) and (a'). Meanwhile, as can be seen in the bottommost row of (e), MUNIT fails to apply the style to the suitable region due to the lack of an attention mask (through the highway networks). The images of DRIT in columns (e') show a translation based on an improperly extracted style: the hair region in those images contains a white color, which seems to be referenced from the shoulder of the person in the exemplar. This indicates that a mask for the exemplar should be properly incorporated in the process. The comparison with the baseline models justifies the need for the local masks and the HAdaIN module of LOMIT.

EmotioNet. Fig. 5 shows the results for AU translation. For training, we use each AU (1, 2, 4, 5, 6, 9, 12, 17, 20, 25, 26, and 43) as a label for the multi-attribute translation loss, so that the model can be trained to translate multiple AUs from the exemplar. Each section is composed of an input image, its mask, an exemplar, its mask, and the translated output. For example, in the top-left part, the input containing AUs 1, 4, 25 (expressionless) takes the exemplar whose AUs are 6, 12, 25 (happy).
5.4 QUANTITATIVE EVALUATION
Table 1: Inception score (IS; mean and standard deviation) and classification accuracy (CA, %) of DRIT, MUNIT, and LOMIT.

Class                     | DRIT IS (mean/std) | DRIT CA (%) | MUNIT IS (mean/std) | MUNIT CA (%) | LOMIT IS (mean/std) | LOMIT CA (%)
Facial Hair               | 0.3295 / 0.24      | 23.8        | 0.2808 / 0.25       | 60.0         | 0.3105 / 0.27       | 71.4
Gender                    | 0.2703 / 0.21      | 21.1        | 0.1368 / 0.19       | 53.0         | 0.2348 / 0.22       | 83.9
Wearing Lipstick          | 0.2685 / 0.20      | 19.9        | 0.1751 / 0.20       | 57.1         | 0.2528 / 0.20       | 73.7
Facial Hair + Gender      | 0.3805 / 0.24      | 14.3        | 0.1972 / 0.25       | 34.9         | 0.2069 / 0.23       | 68.1
Makeup + Wearing Lipstick | 0.2853 / 0.21      | 16.1        | 0.2472 / 0.24       | 27.3         | 0.2834 / 0.22       | 72.8
Inception score and classification accuracy. We compare LOMIT with the baselines using the inception score (Salimans et al., 2016) (IS) and the classification accuracy (CA). IS is high if the translated images are diverse and of high quality. We follow the procedure in MUNIT (Huang et al., 2018) to obtain IS. For the classification, we use the pretrained Inception-v3 (Szegedy et al., 2016) and fine-tune it on the CelebA (Liu et al., 2015) dataset. To be classified correctly with high accuracy, a translated image must carry the appropriate attribute of the exemplar. Table 1 lists the resulting scores and accuracies. In terms of CA, LOMIT achieves the highest accuracy across all evaluated classes by large margins. DRIT achieves a slightly higher IS than LOMIT, but at the cost of CA, indicating that DRIT produces diverse but less recognizable outputs.
6 CONCLUSIONS
In this work, we proposed a local mask-based image-to-image translation model called LOMIT. The co-segmentation networks jointly generate the masks of an input image and an exemplar. The mask of the exemplar excludes irrelevant regions so that the style information is extracted from the relevant region. The mask of the input captures the region to which the style of the exemplar is applied, while the original style is maintained in the rest (through our highway adaptive instance normalization). LOMIT achieves outstanding results compared with the state-of-the-art methods (Huang et al., 2018; Lee et al., 2018). As future work, we will extend our approach into a general normalization method that can be used in other computer vision tasks.
7 APPENDIX
7.1 MODEL ARCHITECTURES
Content Encoder. Similar to MUNIT (Huang et al., 2018), the content encoder E_c is composed of two strided convolutional layers and four residual blocks (He et al., 2016). Following the previous approaches (Huang & Belongie, 2017; Nam & Kim, 2018), instance normalization (IN) is used across all the layers in the content encoder.
Style Encoders. The style encoders E_s^f and E_s^b have the same architecture but different parameters. They consist of four strided convolutional layers, a global average pooling layer, and a fully-connected layer. The style codes s^f and s^b are eight-dimensional vectors. The two style encoders share their first few layers, since these detect low-level features. To maintain the style information, we do not use IN in the style encoders.
Co-segmentation Networks. The co-segmentation networks are composed of six convolutional layers with batch normalization (Ioffe & Szegedy, 2015). The MLP in Fig. 3 has two linear layers with tanh and sigmoid activation functions, respectively.
Decoder. The decoder G has four residual blocks and two convolutional layers, each with an upsampling layer. Because layer normalization (LN) (Ba et al., 2016) normalizes the entire feature map while maintaining the differences between the channels, we use LN in the residual blocks for stable training.
Discriminator. Following StarGAN (Choi et al., 2018), the discriminator D is composed of six strided-convolutional layers, followed by the standard discriminator and the auxiliary classifier.
7.2 TRAINING DETAILS
We utilize the Adam optimizer (Kingma & Ba, 2015) with β_1 = 0.5 and β_2 = 0.999. Following the state-of-the-art approach (Choi et al., 2018) in multi-attribute translation, we augment the data with a horizontal flip with probability 0.5. For stable training, we update {E_c, E_s^f, E_s^b, G} once every five updates of D (Gulrajani et al., 2017). We initialize the weights of D from a normal distribution and apply He initialization (He et al., 2015) to the others. Also, we use a batch size of eight and a learning rate of 0.0001. We decay the learning rate by half every 10,000 iterations, starting from the 100,000th iteration. All the models used in the experiments are trained for 200,000 iterations, each using a single NVIDIA TITAN Xp GPU for 30 hours.

1. How does the proposed method handle diverse image-to-image translations, especially when different regions require distinct treatments?
2. Can you explain the co-segmentation network's functionality and why it focuses on specific parts of the image? What is its training process, and what objective function does it use? Would a semantic segmentation model be more suitable?
3. Could you elaborate on the concept of domain-invariant content codes and style codes? Are there any guiding principles for creating these codes? How easily can the model adapt to new styles for image translation?
4. What does the pink color represent in certain heatmap images (bottom-left or top-right) in Figure 2? Please provide a reference or explanation for the color palette.
5. In Figure 5, why do similar dark patterns appear on the mouth? Is this an interactive transfer manual manipulation or an inherent aspect of the method?
6. Although it's commendable that the authors share their code and models, why does the GitHub page reveal author information? Additionally, why doesn't the provided IPYNB file contain useful examples, leaving the reviewer feeling misled?
Summary--
The paper addresses an issue in current image-to-image translation: different regions of an image should be treated differently. In other words, the background should not be transferred; only the foreground of interest should be. The paper proposes to use co-segmentation to find the common areas for image translation, and reports through experiments that the proposed method works.
There are several major concerns to be addressed before considering publication.
1) The paper says that "For example, in a person's facial image translation, if the exemplar image has two attributes, (1) a smiling expression and (2) a blonde hair, then both attributes have to be transferred with no other options", but the model in the paper still seems incapable of transferring only one attribute. Perhaps an interactive transfer makes more sense, since co-segmentation does not distinguish the part of interest to the user. Alternatively, training a semantic segmentation model makes more sense, as a semantic segment can specify which region to transfer.
2) As co-segmentation is proposed to "capture the regions of a common object existing in multiple input images", why does the co-segmentation network only capture the eye and mouth parts in Figures 2 and 3, and why does it capture mouths of different shape and style in the third macro column of Figure 4 instead of eyes? How is the co-segmentation module trained, and what is the objective function? Why not use a semantic segmentation model?
3) The "domain-invariant content code" and the "style code" seem rather subjective. Are there any principles for designing content and style codes? In the experiments, the paper considers five styles to transfer, as shown in Table 1. Is the model easy to extend to novel styles for image translation?
4) What does the pink color mean in the very bottom-left or top-right heatmap images in Figure 2? There is no pink color reference in the colorbar.
5) Figure 5: Why are there similar dark patterns on the mouth? Is this some manual manipulation for interactive transfer?
6) Though it is always good to see that the authors are willing to release code and models, it is uncomfortable that the GitHub page noted in the abstract reveals the author information. Moreover, although the GitHub page says "an example is example.ipynb", the only ipynb file contains nothing informative, and this makes reviewers feel cheated.
Minor--
There are several typos, e.g., lightinig. |
1. What is the main contribution of the paper in image-to-image translation?
2. What are the typical issues that the paper addresses in this field?
3. How does the proposed approach, LOMIT, deal with these issues?
4. Can you explain how LOMIT works, especially its reliance on cosegmentation?
5. Are there any concerns regarding the experimental results, particularly the qualitative ones?
The paper deals with image-to-image translation (of faces), addressing two main issues: 1) the style information comes from the entire region of a given exemplar, collecting information from the background too, without properly isolating the face area; 2) the extracted style is applied to the entire region of the target image, even if some parts should be kept unchanged. The approach, called LOMIT, is very elaborate, with source code available (possible infringement of anonymity; Area Chair, please check). In a few words, LOMIT rests on a co-segmentation basis, which allows it to find semantic correspondences between image regions of the exemplar and the source image. The correspondences are shown as a soft mask, where the user may decide to operate on some parts while leaving the rest unchanged (the paper shows this for several alternatives: hair, eyes, mouth). Technically, the paper assembles other state-of-the-art techniques (co-segmentation networks, adaptive instance normalization via highway networks), but it does so nicely. The major work in the paper lies in the regularization part, where the authors specify each of their additions properly. The experiments are nice, since for one of the first times they provide facial images that are pleasant to see. One thing I did not like was the three sets of final qualitative results, where gender change produces images that are obviously diverse with respect to the source, but after a while do not communicate anything new. It would have been better to explore other attribute combinations.
ICLR | Title
Local Image-to-Image Translation via Pixel-wise Highway Adaptive Instance Normalization
Abstract
Recently, image-to-image translation has seen a significant success. Among many approaches, image translation based on an exemplar image, which contains the target style information, has been popular, owing to its capability to handle multimodality as well as its suitability for practical use. However, most of the existing methods extract the style information from an entire exemplar and apply it to the entire input image, which introduces excessive image translation in irrelevant image regions. In response, this paper proposes a novel approach that jointly extracts out the local masks of the input image and the exemplar as targeted regions to be involved for image translation. In particular, the main novelty of our model lies in (1) co-segmentation networks for local mask generation and (2) the local mask-based highway adaptive instance normalization technique. We demonstrate the quantitative and the qualitative evaluation results to show the advantages of our proposed approach. Finally, the code is available at https://github.com/AnonymousIclrAuthor/ Highway-Adaptive-Instance-Normalization.
1 INTRODUCTION
Unpaired image-to-image translation (or in short, image translation) based on generative adversarial networks (Goodfellow et al., 2014) aims to transform an input image from one domain to another, without using paired data between different domains (Zhu et al., 2017a; Liu et al., 2017; Kim et al., 2017; Choi et al., 2018; Liu et al., 2017). An unpaired setting, however, is inherently multimodal, denoting a single input image can be mapped to multiple different outputs within a target domain. For example, when translating the hair color of a given image into a blonde, the detailed region (e.g., upper vs. lower, and partial vs. entire) and color (e.g., golden, platinum, and silver) may vary.
Previous studies have achieved such multimodal outputs by adding a random noise sampled from a pre-defined prior distribution (Zhu et al., 2017b) or taking a user-selected exemplar image as additional input, which contains the detailed information of an intended target style (Chang et al., 2018). Recent studies (Lin et al., 2018; Ma et al., 2018) including MUNIT (Huang et al., 2018) and DRIT (Lee et al., 2018) combine those two approaches, showing the state-of-the-art performance by separating (i.e., disentangling) content and style information of a given image through two different encoder networks. However, existing exemplar-based image translation method has several limitations as follows. First, the style information is typically extracted and encoded from the entire region of a given exemplar, thus being potentially noisy due to those regions involved with respect to the target attribute to transfer. Suppose we translate the hair color of an image using an exemplar image. Since the hair color information is available only in the hair region of an image, the style information extracted from the entire region of the exemplar may contain the irrelevant information (e.g., color of the wall and edge pattern of the floor), which should not be reflected in the intended image translation.
On the other hand, the extracted style is then applied to the entire region of the target image, even though particular regions should be kept as it is. Due to this limitation, some of the previous approaches (Huang et al., 2018; Lee et al., 2018) often distort irrelevant regions of an input image such as the background.
Furthermore, when multiple attributes are involved in an exemplar image, one has no choice but to impose all of them when translating a given image. For example, in a person’s facial image translation, if the exemplar image has two attributes, (1) a smiling expression and (2) a blonde hair, then both attributes have to be transferred with no other options.
To tackle these issues, we propose a novel, LOcal Mask-based Image Translation approach, called LOMIT, which jointly generates a local, pixel-wise soft binary mask of an exemplar (i.e., the source region from which to extract out the style information) and that of an input image to translate (i.e., the target region to which to apply the extracted style). This approach has something in common with those recent approaches that have attempted to leverage an attention mask in image translation (Pumarola et al., 2018; Chen et al., 2018; Yang et al., 2018; Ma et al., 2018; Mejjati et al., 2018). In most approaches, the attention mask (extracted from an input image) plays a role of determining the target region to apply a translation. Note that we expand those approaches by jointly extracting two masks, from an input and an exemplar image, respectively, acting as the attention mask of an input and a relevant region (foreground) extractor of an exemplar.
The main novelty of LOMIT is that to jointly obtain the local masks of two images, we utilize the cosegmentation networks (Rother et al., 2006), which aim (1) to extract the targeted style information without noise introduced from irrelevant regions and (2) to translate only the necessary region of a target image while minimizing its distortion. While co-segmentation approaches were originally proposed to capture the regions of a common object existing in multiple input images (Rother et al., 2006; Li et al., 2018), we adopt and train co-segmentation networks for our own purpose.
Once obtained local masks, LOMIT extends a recently proposed technique for image translation, called adaptive instance normalization, using highway networks (Srivastava et al., 2015), which computes the weighted average of the input and the translated pixel values using the abovementioned pixel-wise local mask values as different linear combination weights per pixel location. LOMIT has an additional advantage of being able to manipulate the computed masks to selectively transfer an intended style, e.g., choosing either a hair region (to transfer the hair color) or a facial region (to transfer the facial expression).
The effectiveness of LOMIT is evaluated on two facial datasets, via a user study and other quantitative measures such as the inception score and the classification accuracy.
2 BASIC SETUP
We define “content” as common features (an underlying structure) across all domains (e.g., the pose of a face, the location and the shape of eyes, a nose, a mouth, and hair), and “style” as a representation of the structure (e.g., background color, facial expression, skin tone, and hair color).
As shown in Fig. 1, we assume that an image x can be represented as x = c⊕ s, where c is a content code in a content space, and s is a style code in a style space. The operator⊕ combines and converts the content code c and the style code s into a complete image x.
By considering the local mask indicating the relevant region (or simply, the foreground) to extract the style from or to apply it to, we further assume that s is decomposed into s = sf ⊕ sb, where sf is the style code extracted from the foreground region and sb is that from the background region. Separating an integrated style space S into a foreground style space Sf and a background style space Sb play a role of disentangling style feature representation1. The pixel-wise soft binary mask m of an image x is represented as a matrix with the same spatial resolution of x. Each entry of m lies between 0 and 1, which indicates the degree of the corresponding pixel belonging to the foreground. Then, the local foreground/background regions xf /xb of x is obtained as
xf = m x, xb = (1−m) x, (1)
where is an element-wise multiplication. Finally, our assumption is extended to x = c⊕ sf ⊕ sb, where c, sf , and sb are obtained by the content encoder Ec:, the foreground style encoder Efs , and the background style encoder Ebs , respectively, which are all shared across multiple domains in LOMIT, i.e.,
{c_x, s_x^f, s_x^b} = {E_c(x), E_s^f(x^f), E_s^b(x^b)},   c_x ∈ C, s_x^f ∈ S^f, s_x^b ∈ S^b.   (2)
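As a concrete illustration of the masking in Eq. (1), the following minimal PyTorch sketch splits an image into soft foreground/background regions; the tensor shapes are assumptions for illustration, not the authors' code.

```python
import torch

x = torch.rand(1, 3, 128, 128)   # image batch (B, C, H, W)
m = torch.rand(1, 1, 128, 128)   # soft mask in [0, 1], broadcast over channels

x_fg = m * x          # x^f = m ⊙ x
x_bg = (1 - m) * x    # x^b = (1 − m) ⊙ x
```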
It is critical in LOMIT to properly learn to generate the local mask involved in image translation. To this end, we propose to combine the mask generation networks with our novel highway adaptive instance normalization, as will be described in Section 3.2.
¹ To verify that the representations are properly disentangled, we refer the readers to the 2D embedding visualization of each space (C, S^f, S^b) in Fig. 8 in the appendix.
3 LOCAL IMAGE TRANSLATION MODEL
We first denote x_1 ∈ X_1 and x_2 ∈ X_2 as images from domains X_1 and X_2, respectively. As shown in Fig. 2, LOMIT converts a given image x_1 to X_2 and vice versa, i.e., x_{1→2} = G(h(E_c(x_1), E_s^f(x_2^f), E_s^b(x_1^b))) and x_{2→1} = G(h(E_c(x_2), E_s^f(x_1^f), E_s^b(x_2^b))), where G is the decoder network and h is our proposed local mask-based highway adaptive instance normalization (in short, HAdaIN), described in detail in Section 3.2.
For brevity, we omit the domain index notation in, say, m ∈ {m_1, m_2} and x ∈ {x_1, x_2}, unless needed for clarification.
3.1 LOCAL MASK EXTRACTION
LOMIT utilizes the local mask m to separate the image x into the foreground and background regions, x^f and x^b. That is, we jointly extract the local masks of the input and the exemplar images, as the regions effectively involved in image translation, via co-segmentation networks. For example, given the input image and the exemplar, if LOMIT identifies a hair color difference between the facial images, e.g., blonde vs. black, then the local masks should be obtained as the hair regions of the two images.
As shown in Fig. 3, given two images x_1 and x_2, the co-segmentation networks first encode the content of each as c_1 and c_2 via the content encoder E_c. Next, in the case of computing the segmentation of x_2, after average-pooling c_1 globally, we forward it to an MLP to obtain the channel-wise soft binary mask c_1^attn, which is then multiplied with c_2 in a channel-wise manner, i.e., c_1^attn ⊙ c_2, where c_1^attn = σ(MLP(c_1)). This step transfers the object information from x_1 to x_2. Finally, we forward-propagate this output into the attention network A to obtain the local mask m_2 of x_2, i.e., m_2 = A(c_1^attn ⊙ c_2). The same process applies to the opposite case in a similar manner, resulting in m_1 = A(c_2^attn ⊙ c_1). Note that our co-segmentation networks are trained in an end-to-end manner with no direct supervision; a sketch of this gating step is given below.
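The following minimal PyTorch sketch illustrates the gating step; the layer sizes and the depth of the mask network A are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoSegAttention(nn.Module):
    """The globally pooled content of one image produces a channel-wise gate
    for the other image's content map, which the mask network A turns into a
    soft mask."""
    def __init__(self, channels=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels), nn.Tanh(),
            nn.Linear(channels, channels), nn.Sigmoid())
        self.A = nn.Sequential(  # mask network A (depth is illustrative)
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, c1, c2):
        g1 = c1.mean(dim=(2, 3))                   # global average pooling of c1
        c1_attn = self.mlp(g1)[:, :, None, None]   # c_1^attn = σ(MLP(c1))
        return self.A(c1_attn * c2)                # m2 = A(c_1^attn ⊙ c2)

# m1 is obtained symmetrically by swapping the arguments: coseg(c2, c1).
```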
3.2 HIGHWAY ADAPTIVE INSTANCE NORMALIZATION
Adaptive instance normalization (AdaIN) is an effective style transfer technique (Huang & Belongie, 2017). Generally, it matches the channel-wise statistics, e.g., the mean and the variance, of the activation map of an input image with those of a style image. In the context of image translation, MUNIT (Huang et al., 2018) extends AdaIN such that the target mean and variance are computed
as the outputs of the trainable functions β and γ of a given style code, i.e.,

AdaIN_{x1→x2}(c_1, s_2) = γ(s_2) · (c_1 − μ(c_1)) / σ(c_1) + β(s_2).   (3)
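A minimal PyTorch sketch of Eq. (3) is shown below; `beta` and `gamma` stand in for the style-MLP outputs β(s) and γ(s), assumed to be shaped (B, C, 1, 1).

```python
import torch

def adain(c, beta, gamma, eps=1e-5):
    """Normalize the content map c per channel, then re-scale and shift it
    with the style-derived statistics gamma and beta."""
    mu = c.mean(dim=(2, 3), keepdim=True)
    sigma = c.std(dim=(2, 3), keepdim=True) + eps
    return gamma * (c - mu) / sigma + beta
```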
As we pointed out earlier, such a transformation is applied globally over the entire region of an image, which may unnecessarily distort irrelevant regions. Hence, we formulate our local mask-based highway AdaIN (HAdaIN) as
HAdaIN_{x1→x2}(m_1, c_1, s_2^f, s_1^b) = m_1 ⊙ AdaIN_{x1→x2}(c_1, s_2^f) + (1 − m_1) ⊙ AdaIN_{x1→x1}(c_1, s_1^b),   (4)

where each of β and γ in Eq. (3) is defined as a multi-layer perceptron (MLP), i.e., [β(s^f); γ(s^f)] = MLP^f(s^f) and [β(s^b); γ(s^b)] = MLP^b(s^b). Note that we use different MLPs for the foreground and the background style code inputs. The first term of Eq. (4) corresponds to the local region of an input image translated by the foreground style, while the second corresponds to the complementary region where the original style of the input is kept as it is.
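Building on the adain() helper above, Eq. (4) can be sketched as follows; the (beta, gamma) pairs are assumed to come from MLP^f(s^f) and MLP^b(s^b), respectively.

```python
def hadain(m, c, beta_f, gamma_f, beta_b, gamma_b):
    """The mask m gates per pixel between the foreground-styled and the
    identity-styled (background) AdaIN outputs."""
    fg = adain(c, beta_f, gamma_f)   # AdaIN_{x1→x2}(c1, s2^f)
    bg = adain(c, beta_b, gamma_b)   # AdaIN_{x1→x1}(c1, s1^b)
    return m * fg + (1 - m) * bg
```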
4 TRAINING OBJECTIVES
This section describes each of our loss terms in the objective function used for training our model.
4.1 STYLE AND CONTENT RECONSTRUCTION LOSS
The foreground style of the translated output should be close to that of the exemplar, while the background style of the translated output should be close to that of the original input image. We formulate these criteria as the following style reconstruction loss terms:
L_{s^f}^{1→2} = E_{x_{1→2}^f, x_2^f}[ ‖E_s^f(x_{1→2}^f) − E_s^f(x_2^f)‖_1 ]   (5)
L_{s^b}^{1→2} = E_{x_{1→2}^b, x_1^b}[ ‖E_s^b(x_{1→2}^b) − E_s^b(x_1^b)‖_1 ].   (6)
From the perspective of content information, the content feature of an input image should be consistent with that of its translated output, which is represented as the content reconstruction loss

L_c^{1→2} = E_{x_{1→2}, x_1}[ ‖E_c(x_{1→2}) − E_c(x_1)‖_1 ].   (7)

Note that the content reconstruction is imposed across the entire region of the input image, regardless of the local mask.
4.2 IMAGE RECONSTRUCTION LOSS
As an effective supervision approach in an unpaired image translation setting, we adopt the image-level cyclic consistency loss (Zhu et al., 2017a) between an input image and its output through two consecutive image translations, X_1 → X_2 → X_1 (or X_2 → X_1 → X_2), i.e.,
L_cyc^{1→2→1} = E_{x_1}[ ‖x_{1→2→1} − x_1‖_1 ].   (8)
Meanwhile, similar to previous studies (Huang et al., 2018; Lee et al., 2018), we perform not only the inter-domain translation (x_1 → x_{1→2}) but also the intra-domain translation (x_1 → x_{1→1}). This intra-domain translation should work similarly to an auto-encoder (Larsen et al., 2016), and the corresponding loss term is written as
L_x^{1→1} = E_{x_1}[ ‖x_{1→1} − x_1‖_1 ].   (9)
4.3 DOMAIN ADVERSARIAL LOSS
To approximate the real-data distribution via our model, we adopt a domain adversarial loss by introducing the discriminator network D_src. Among the loss terms proposed in the original GAN (Goodfellow et al., 2014), LSGAN (Mao et al., 2017), and WGAN-GP (Arjovsky et al., 2017; Gulrajani et al., 2017), we chose WGAN-GP, which we empirically found to work best. That is, our adversarial loss is written as
L_adv^{1→2} = E_{x_1}[D_src(x_1)] − E_{x_{1→2}}[D_src(x_{1→2})] − λ_gp · E_{x̂}[ (‖∇_{x̂} D_src(x̂)‖_2 − 1)² ],   (10)

where x_{1→2} = G(h(c_1, s_2^f, s_1^b)), x̂ is sampled uniformly along the line between a pair of real and translated images, and λ_gp = 10. We also apply the patch-based discriminator loss proposed in PatchGAN (Isola et al., 2017; Zhu et al., 2017a).
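For reference, the gradient-penalty term of Eq. (10) can be sketched as below; `d_src` stands in for D_src, and the interpolation follows the standard WGAN-GP scheme.

```python
import torch

def wgan_gp_penalty(d_src, real, fake, lambda_gp=10.0):
    """Gradient penalty: x_hat lies uniformly on the line between a real and
    a translated image; the critic's gradient norm is driven toward 1."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(d_src(x_hat).sum(), x_hat, create_graph=True)[0]
    return lambda_gp * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```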
4.4 MULTI-ATTRIBUTE TRANSLATION LOSS
We use an auxiliary classifier (Odena et al., 2016) to cover multi-attribute translation with a single shared model, similar to StarGAN (Choi et al., 2018). The auxiliary classifier D_cls, which shares its parameters with the discriminator D_src except for the last layer, classifies the domain of a given image. In detail, its loss terms are defined as
L_{cls_r}^{1→2} = E_{x_1}[ −log D_cls(y_{x_1} | x_1) ]   (11)
L_{cls_f}^{1→2} = E_{x_{1→2}}[ −log D_cls(y_{x_2} | x_{1→2}) ],   (12)
where y_x is the domain label of an input image x. Similar to the concept of weakly supervised learning (Zhou et al., 2016; Selvaraju et al., 2017), this loss term supervises the local mask m to point to the proper region of the corresponding domain y through the HAdaIN module, allowing our model to extract the style from the proper region of the exemplar.
4.5 MASK REGULARIZATION LOSSES
We impose several additional regularization losses on local mask generation to improve the overall image generation performance as well as the interpretability of the generated mask.
The first regularization is to minimize the difference of the mask values of those pixels that have similar content information. This helps the local mask consistently capture a semantically meaningful region as a whole, e.g., capturing the entire hair region even when the lighting conditions and the hair color vary significantly within the exemplar. In detail, we design this regularization as minimizing
R_1 = E[ Σ_{i,j} [ |m·1^T − 1·m^T| ⊙ (ĉ·ĉ^T) ]_{ij} ],   (13)

where 1 is a vector whose elements are all ones, {1, m} ∈ R^{WH×1}, and ĉ ∈ R^{WH×C} with ĉ = c / ‖c‖. The first term is the distance matrix of all pairs of pixel-wise mask values in m, and the second term is the cosine similarity matrix of all pairs of C-dimensional pixel-wise content vectors. Note that we backpropagate the gradients generated by this regularization term only through m to train the co-segmentation networks, but not through ĉ, so that it does not affect the content encoder E_c.
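A minimal sketch of Eq. (13), assuming m has been flattened to shape (WH, 1) and c to (WH, C):

```python
import torch
import torch.nn.functional as F

def mask_content_consistency(m, c):
    """Penalize mask-value differences between pixel pairs whose content
    vectors are cosine-close; gradients flow only through m, matching the
    note above."""
    c_hat = F.normalize(c, dim=1).detach()   # ĉ = c / ||c||, no grad to the encoder
    mask_dist = (m - m.t()).abs()            # |m_i − m_j| for all pixel pairs
    content_sim = c_hat @ c_hat.t()          # cosine similarity ĉ · ĉ^T
    return (mask_dist * content_sim).sum()
```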
The second regularization makes the local masks of the two images capture only those regions having contrasting styles. This regularization is useful especially when multiple attributes are involved in image translation. For example, if the two facial images have different hair colors but a common facial expression, then the local masks should indicate only the hair regions. We formulate this regularization as maximizing the difference between the foreground style codes extracted from the two masked regions, which is written as
R_2 = −E[ ‖s_1^f − s_2^f‖_1 ].   (14)
The third regularization is simply to minimize the local mask region (Chen et al., 2018; Pumarola et al., 2018) to encourage the model to focus only on a necessary region involved in image translation, by minimizing
R_3 = E[ ‖m‖_1 ].   (15)
4.6 FULL LOSS
Finally, our full loss is defined as
L_D = −L_adv + λ_cls · L_{cls_r},
L_G = L_adv + λ_cls · L_{cls_f} + λ_{s,c} · (L_{s^f} + L_{s^b} + L_c) + λ_x · (L_cyc + L_x^{1→1} + L_x^{2→2}) + λ_1 · R_1 + λ_2 · R_2 + λ_3 · R_3,   (16)
where each L without a direction superscript denotes the sum over both directions (1→2 and 2→1), and λ_cls = 1, λ_{s,c} = 1, λ_x = 10, λ_1 = 0.1, λ_2 = 0.01, and λ_3 = 0.0001. Note that our training process contains both intra-domain translations (x_1 → x_{1→1} and x_2 → x_{2→2}) and inter-domain translations (x_1 → x_{1→2} and x_2 → x_{2→1}).
5 EXPERIMENTS
We evaluate LOMIT and the baseline models on two facial datasets, CelebA (Liu et al., 2015) and EmotioNet (Fabian Benitez-Quiroz et al., 2016). We first describe the datasets and the baseline models in subsections 5.1 and 5.2. Second, we present qualitative comparisons of both multi- and single-attribute translation results against the baseline methods in subsection 5.3. Third, we report user study results to validate the human-perceived quality of the translated results in subsection 5.4. Lastly, we evaluate the performance of LOMIT using the inception score (Salimans et al., 2016) and the classification accuracy. The model architecture and training details used in the experiments are described in the appendix (subsections 7.1 and 7.2).
5.1 DATASETS
CelebA. The CelebA (Liu et al., 2015) dataset consists of 202,599 face images of celebrities and 40 attribute annotations per image. We pick 10 attributes (i.e., black hair, blond hair, brown hair, smiling, goatee, mustache, no beard, male, heavy makeup, wearing lipstick) that would convey meaningful local masks. We randomly select 2,000 images for testing and use the others for training. Images are center-cropped and scaled down to 128×128.
EmotioNet. The EmotioNet (Fabian Benitez-Quiroz et al., 2016) dataset contains 975,000 images of facial expressions in the wild, each annotated with 12 action units (AUs). Each AU denotes an activation of a specific facial muscle (e.g., jaw drop, nose wrinkler). We crop each image using a face detector² and resize them to 128×128. We use 2,000 images for testing and 200,000 images for training.
5.2 BASELINE METHODS
MUNIT. MUNIT (Huang et al., 2018) decomposes an image into a domain-invariant content code and a domain-specific style code. By randomly sampling latent style codes during training, MUNIT attempts to reflect the multimodal nature of various style domains. We train MUNIT on the CelebA (Liu et al., 2015) dataset and report its results for our comparison.
²https://github.com/ageitgey/face_recognition
DRIT. DRIT (Lee et al., 2018) employs two encoders, which extract the domain-invariant content information and the domain-specific style information, respectively. The model is trained with a content discriminator that ensures the content space is shared. Its loss functions and training process are similar to those of MUNIT.
5.3 QUALITATIVE RESULTS
CelebA. As shown in Fig. 4, we compare our model with the baseline models using CelebA dataset (Liu et al., 2015). The baseline models are trained with a dataset corresponding to each target attribute at the topmost column (e.g., in the gender case, a dataset is divided into male/female). On the other hand, when training LOMIT, we construct a set of multiple attributes by combining a few different attributes and train the model for multi-attribute translation (i.e., the columns in a black border, (a), (b), and (c) in Fig. 4). Meanwhile, in order to conduct the single-attribute translation, we interactively modify the output masks of the co-segmentation module and forward the manipulated masks into the networks. That is, as illustrated in Figs. 4(a’), (b’), and (c’), we manually remove the mask for both an input and an exemplar and obtain the result for single-attribute translation. Note that the area marked as a red rectangle in the mask indicates the removed area.
The first four rows correspond to the input images, their generated masks, the exemplars, and their generated masks. Each of the last three rows provides a comparison of our model against the baselines. The topmost row indicates the target attribute for each column; those in black denote multiple attributes, while those in red represent a single attribute after removing the mask subregion (a red rectangle) corresponding to the other attribute. We denote an image as Facial Hair when it belongs to any of the three classes Beard, Goatee, or Mustache. LOMIT tends to keep the background intact across various classes and to apply the style to the appropriate region, while transferring the properly extracted style (attribute) from the exemplar. Compared to LOMIT, the two other models suffer from undesirable distortion in the background, as shown in the first and second rows from the bottom of (a) and (a'). Meanwhile, as can be seen in the bottommost row of (e), MUNIT fails to apply the style to the suitable region due to the lack of an attention mask (through the highway networks). The images of DRIT in columns (e') show a translation with an improperly extracted style: the hair regions in the images contain a white color that appears to be taken from the shoulder of the person in the exemplar, indicating that a mask for the exemplar should be properly incorporated in the process. From the comparison with the baseline models, we justify the need for the local masks and the HAdaIN module of LOMIT. EmotioNet. Fig. 5 shows the results for AU translation. For training, we use each AU (1, 2, 4, 5, 6, 9, 12, 17, 20, 25, 26, and 43) as a label for the multi-attribute translation loss, so that the model can be trained to translate multiple AUs from the exemplar. Each section is composed of an input image, its mask, an exemplar, its mask, and a translated output. For example, in the top-left section, the input containing AUs 1, 4, 25 (expressionless) takes an exemplar whose AUs are 6, 12, 25 (happy).
5.4 QUANTITATIVE EVALUATION
Table 1: Inception score (IS; mean and std) and classification accuracy (CA, %) for each class.

Class                     | DRIT IS (mean/std) | DRIT CA (%) | MUNIT IS (mean/std) | MUNIT CA (%) | LOMIT IS (mean/std) | LOMIT CA (%)
Facial Hair               | 0.3295 / 0.24      | 23.8        | 0.2808 / 0.25       | 60.0         | 0.3105 / 0.27       | 71.4
Gender                    | 0.2703 / 0.21      | 21.1        | 0.1368 / 0.19       | 53.0         | 0.2348 / 0.22       | 83.9
Wearing Lipstick          | 0.2685 / 0.20      | 19.9        | 0.1751 / 0.20       | 57.1         | 0.2528 / 0.20       | 73.7
Facial Hair + Gender      | 0.3805 / 0.24      | 14.3        | 0.1972 / 0.25       | 34.9         | 0.2069 / 0.23       | 68.1
Makeup + Wearing Lipstick | 0.2853 / 0.21      | 16.1        | 0.2472 / 0.24       | 27.3         | 0.2834 / 0.22       | 72.8
Inception score and classification accuracy. We compare LOMIT with the baselines using the inception score (IS) (Salimans et al., 2016) and classification accuracy (CA). IS is high if the translated images are diverse and of high quality. We follow the procedure in MUNIT (Huang et al., 2018) to obtain IS. For classification, we use a pretrained Inception-v3 (Szegedy et al., 2016) and fine-tune it on the CelebA (Liu et al., 2015) dataset. To be classified with high accuracy, a translated image must properly carry the attributes of the exemplar. Table 1 lists the resulting scores and accuracies. In terms of CA, LOMIT achieves the highest accuracy across all evaluated classes by large margins. DRIT achieves a slightly higher IS than LOMIT, but at the cost of CA, indicating that DRIT produces diverse but less recognizable outputs.
6 CONCLUSIONS
In this work, we proposed a local mask-based image-to-image translation model called LOMIT. The co-segmentation networks jointly generate the mask of an input image and that of an exemplar. The mask of the exemplar excludes irrelevant regions so that the style information is extracted only from the relevant region. The mask of the input captures the region to which the exemplar's style is applied, while the original style is maintained in the rest (through our highway adaptive instance normalization). LOMIT achieves outstanding results compared with the state-of-the-art methods (Huang et al., 2018; Lee et al., 2018). As future work, we will extend our approach to a general normalization method that can be used in other computer vision tasks.
7 APPENDIX
7.1 MODEL ARCHITECTURES
Content Encoder. Similar to MUNIT (Huang et al., 2018), the content encoder E_c is composed of two strided convolutional layers and four residual blocks (He et al., 2016). Following previous approaches (Huang & Belongie, 2017; Nam & Kim, 2018), instance normalization (IN) is used across all the layers in the content encoder.
Style Encoders. The style encoders E_s^f and E_s^b have the same architecture but different parameters. They consist of four strided convolutional layers, a global average pooling layer, and a fully-connected layer. The style codes s^f and s^b are eight-dimensional vectors. The two style encoders share their first few layers, since these layers detect low-level features. To maintain the style information, we do not use IN in the style encoders.
Co-segmentation Networks. The co-segmentation networks are composed of six convolutional layers with batch normalization (Ioffe & Szegedy, 2015). The MLP in Fig. 3 has two linear layers with tanh and sigmoid activation functions, respectively.
Decoder. Decoder G has four residual blocks and two convolutional layers with an upsampling layer each. Because the layer normalization (LN) (Ba et al., 2016) normalizes the entire feature map, maintaining the differences between the channels, we use LN in the residual blocks for stable training.
Discriminator. Following StarGAN (Choi et al., 2018), the discriminator D is composed of six strided-convolutional layers, followed by the standard discriminator and the auxiliary classifier.
7.2 TRAINING DETAILS
We utilize the Adam optimizer (Kingma & Ba, 2015) with β_1 = 0.5 and β_2 = 0.999. Following the state-of-the-art approach (Choi et al., 2018) in multi-attribute translation, we augment the data with a horizontal flip with probability 0.5. For stable training, we update {E_c, E_s^f, E_s^b, G} once every five updates of D (Gulrajani et al., 2017). We initialize the weights of D from a normal distribution and apply He initialization (He et al., 2015) to the others. We use a batch size of eight and a learning rate of 0.0001, and linearly decay the learning rate, halving it every 10,000 iterations starting from iteration 100,000. All the models used in the experiments are trained for 200,000 iterations on a single NVIDIA TITAN Xp GPU, taking 30 hours each. | 1. What is the main contribution of the paper regarding unpaired image-to-image translation?
2. What are the strengths of the proposed method, particularly in its application of co-segmentation and adaptive instance normalization techniques?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works and experiment settings?
4. How does the reviewer assess the clarity and quality of the paper's content? | Review | Review
This paper proposes an unpaired image-to-image translation method that applies co-segmentation networks and adaptive instance normalization to enable manipulation of local regions.
Pros:
* This paper proposes to jointly learn the local mask to make the translation focus on the foreground instead of the whole image.
* The local mask-based highway adaptive instance normalization applies the style information to the local region correctly.
Cons:
* There seems a conflict in the introduction (page 1): the authors clarify that “previous methods [1,2,3] have a drawback of ....” and then clarify that “[1,2,3] have taken a user-selected exemplar image as additional input ...”.
* As the main experiments are about facial attribute translation, I strongly recommend that the authors compare their work with StarGAN [4].
* It is mentioned in the introduction (page 2) that “This approach has something in common with those recent approaches that have attempted to leverage an attention mask in image translation”. However, the differences between the proposed method and these prior works are not compared or mentioned. Some of these works also applied mask techniques or adaptive instance normalization to the image-to-image translation problem. I wonder what the advantages of the proposed method are compared to these works.
* The experiment setting is not clear enough. If I understand correctly, the face images are divided into two groups based on their attributes (e.g. smile vs no smile). If so, what role does the exemplar image play here? Since the attribute information has been modeled by the network parameters, will different exemplar images lead to different translation outputs?
* The github link for code should not provide any author information.
[1] Multimodal Unsupervised Image-to-Image Translation
[2] Diverse Image-to-Image Translation via Disentangled Representations
[3] Exemplar Guided Unsupervised Image-to-Image Translation
[4] StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Overall, I think the proposed method is well-designed but the comparison and experiment setting are not explained well. My initial rating is weakly reject. |
ICLR | Title
Zero-shot detection of daily objects in YCB video dataset
Abstract
For robots to manipulate objects, they must first sense the objects' locations. With the development of visual data collection and processing technology, robots are gradually evolving to localize objects in a larger field of view rather than being limited to a small space where an object could appear. To train such a robot vision system, pictures of all the objects need to be taken under various orientations and illuminations. In a traditional manufacturing environment this is feasible, since the objects involved in the production process do not change frequently. However, in the vision of smart manufacturing and high-mix-low-volume production, the parts and products robots handle may change frequently, so it is unrealistic to re-train the vision system for every new product and task. Under this situation, we see the necessity of introducing zero-shot object detection. Zero-shot object detection is a subset of unsupervised learning: it aims to detect novel objects in an image with knowledge learned from, and only from, seen objects. With a zero-shot object detection algorithm, a great deal of time can be saved on collecting training data and training the vision system. Previous works focus on detecting objects in outdoor scenes, such as bikes, cars, people, and dogs. The detection of daily objects is actually more challenging, since the knowledge that can be learned from each object is very limited. In this work, we explore the zero-shot detection of daily objects in indoor scenes, since the objects' sizes and environment are closely related to a manufacturing setup. The YCB Video dataset is used in this work, which contains 21 objects of various categories. To the best of our knowledge, no previous work has explored zero-shot detection at this object-size level or on this dataset.
1 INTRODUCTION
Industrial robots have received more and more attention in the manufacturing industry due to the rising cost of human labour and the decreasing cost of industrial robots (Carlisle, 2017). Since robots can handle heavy and repetitive jobs better than humans, many manufacturing plants have replaced human labour on the production line with robots (Robla-Gómez et al., 2017). In today's mass-production manufacturing pattern, an industrial robot is only in charge of a certain processing step with dedicated parts. This scenario does not require the robot to change its target objects frequently. However, with the recent development of control and communication technologies, the manufacturing industry is gradually evolving to high-mix-low-volume production that provides personalized products for customers (Lu et al., 2020). This is also known as smart manufacturing; in this scenario, the manufacturing system becomes more flexible. Instead of being tied to a specific task, robots will be allocated to various tasks depending on demand, which requires them to recognize a wide range of objects that could be involved during production.
With the development over the years, today's object detection and recognition algorithms such as Faster R-CNN (Ren et al., 2015), SSD (Liu et al., 2016), YOLO (Redmon et al., 2016) and EfficientDet (Tan et al., 2020) have reached high performance. With enough collected training data, those off-the-shelf algorithms can be easily applied to a robot's vision system to recognize and localize the objects involved in the production process. However, data collection, labelling and the training of a neural network form a time-consuming process that requires expertise in the machine vision field. Even though there are data generation methods that can generate synthetic images and labels from CAD models
(Wohlhart & Lepetit, 2015), frequently training a new neural network for additional new parts is still not realistic in personalized production.
Zero-shot learning (ZSL) is a learning paradigm that learns knowledge from seen categories and applies it to new categories in order to recognize objects that have never been seen before. In the work carried out by Zhang & Saligrama (2015) and Zhang & Saligrama (2016), zero-shot learning algorithms already achieved a reasonable accuracy in classifying unseen objects. While zero-shot learning only aims to recognize unseen categories, a more realistic problem called generalized zero-shot learning (gZSL) was proposed by Xian et al. (2017), which aims to recognize both seen and unseen categories. However, gZSL still has flaws and cannot be directly applied to solve the previously mentioned issues. Both ZSL and gZSL only focus on recognizing the object in an image. Thus, a big assumption is made before applying those algorithms: only one object appears in the image, and it is always located in the middle of the image. In this setting, ZSL and gZSL algorithms only need to analyze the categorical information in the image but not the location information. In other words, they take the whole image as one object proposal. In real life, cameras attached to robots are always moving with the robot and cannot guarantee that the target object is located in the middle of the camera's view. Thus, generalized zero-shot detection (gZSD) and gZSL have to be combined to achieve a vision system that can detect and recognize unseen objects, where gZSD handles localizing seen and unseen objects in the field of view, and gZSL handles the recognition and categorization of seen and unseen objects.
There are existing works in this field that try to solve gZSD and gZSL at the same time to complete the whole pipeline. For example, Rahman et al. (2018) tried to combine Faster R-CNN with a gZSL module to generate object proposals and class predictions. Zhu et al. (2019) worked on YOLOv2, using a single-stage detector to generate object proposals. Previous works have all worked on outdoor datasets such as MS COCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2010). Objects in those datasets are grouped into categories. For example, men, women and children are very different in terms of their visual appearance, but they are all allocated to the 'people' category. The variety of objects in one category can increase the generality of the trained algorithm. For detecting daily objects, datasets such as YCB Video (Xiang et al., 2017) are chosen instead. The 3D objects in this dataset are similar in size and characteristics to the objects that could appear in a production pipeline. However, each object in such 3D object datasets is unique and cannot be grouped into categories, which brings the challenge of harder generalization to unseen objects.
Regarding the problems and challenges we found in the future manufacturing environment and in gZSD, we propose to modify the base version of YOLOv5 (Jocher et al., 2021) to perform gZSD on the YCB Video dataset. Compared to two-stage detectors such as Faster R-CNN, one-stage detectors such as YOLO and SSD are much faster. YOLOv5, as the latest version of the YOLO series of detectors, has been shown to be faster and better than previous versions. Compared to the work done by Zhu et al. (2019), which used a modified YOLOv2 (Redmon & Farhadi, 2017) to perform gZSD, YOLOv5 outputs object proposals at three different levels and thus has better coverage of object sizes. For training and testing our algorithm, four of the 21 objects in the YCB Video dataset are picked as unseen objects. Any image that contains these four unseen objects never appears during training but is used during testing. For every object, its class label is translated into an attribute vector that contains the colour and shape information of the object. Thus, we transform the classical single-label problem into a multi-label problem to let the neural network learn attribute labels and apply them to unseen objects. Note that in this work we only address the gZSD problem but not gZSL, which means we only aim to localize seen and unseen objects in the images with bounding boxes, not to assign class labels to the objects.
Our contributions in this paper are three-fold: (1) A novel neural network structure that is based on YOLOv5 and able to perform generalized zero-shot detection; the output bounding boxes can be further combined with other gZSL algorithms to achieve full zero-shot object detection and recognition. (2) A novel splitting method for the YCB Video dataset that splits the dataset by seen and unseen objects; this split can be used for both gZSD and gZSL research related to daily objects. (3) A novel attribute labelling method for the objects in the YCB Video dataset, which converts the class labels into 16 attributes representing the colour and shape information of an object for the neural network to learn.
2 RELATED WORK
2.1 OBJECT DETECTION
Research on object detection and recognition has been developing rapidly in the past decade. The earliest deep-learning-based image classification algorithm can be traced to the work published by Krizhevsky et al. (2012). Since then, the recognition speed and accuracy of image classification algorithms have improved continuously. Detection algorithms can be divided into two categories: two-stage detectors and one-stage detectors. Two-stage detectors such as Faster R-CNN (Ren et al., 2015) and R-FCN (Dai et al., 2016) generate object proposals with a Region Proposal Network (RPN) and then perform object classification based on these proposals. One-stage detectors like SSD (Liu et al., 2016), YOLO (Redmon et al., 2016) and EfficientDet (Tan et al., 2020) generate object proposals and classify objects at the same time by dividing the image into grids. Thus, one-stage detectors are faster than two-stage detectors and have gained more attention recently.
YOLOv5 (Jocher et al., 2021), used in this paper, is the fifth version of the classical YOLO detector. There is still an argument in this field about whether this algorithm qualifies as the fifth version, and the maintainer of YOLOv5 has not published a paper to justify the algorithm's ability. However, tests on the MS COCO dataset (Lin et al., 2014) show that YOLOv5 outperforms the state-of-the-art algorithm, Google's EfficientDet, in both speed and accuracy. YOLOv5 is composed of three parts: backbone, neck and head. When an image is passed into the network, it is first processed by the DarkNet (Bochkovskiy et al., 2020) backbone, then passed into the PANet (Wang et al., 2019) neck, which processes and splits the feature map into three different feature levels to better cover objects of different sizes. Finally, the YOLO detector head outputs predictions at three levels based on the feature maps. In this work, we use YOLOv5 as the base algorithm and modify the detectors in the head part of the neural network, enabling it to detect both seen and unseen objects.
2.2 ZERO-SHOT LEARNING
Given images with class labels, zero-shot learning (ZSL) aims to classify unseen classes based on knowledge learned from seen classes (Fu et al., 2018). ZSL works by learning semantic information from seen classes and reassembling the semantic attributes to predict unseen classes (Zhang & Saligrama, 2016; Rahman et al., 2018). In other words, the algorithm learns the mapping from the visual domain to the semantic domain during training and makes predictions by mapping from the semantic domain back to the visual domain. However, ZSL algorithms are designed for recognizing unseen classes; their performance degrades when both seen and unseen classes need to be recognized. Xian et al. (2017) proposed generalized ZSL (gZSL), which relaxes the constraint on recognition targets to include seen classes as well. However, ZSL and gZSL both assume that only one target exists in an image and that it is located right in the middle of the image. They still lack the ability to isolate an object from the background or from occlusion by other objects.
2.3 ZERO-SHOT DETECTION AND RECOGNITION
Several methods have been proposed recently to solve the zero-shot detection and recognition problem. These algorithms can not only detect the locations of seen and unseen objects in an image but also classify them. Most of them take a two-stage detector approach. In the research carried out by Bansal et al. (2018), Edge-Box was used to generate region proposals, and a Region Proposal Network (RPN) was used in the work done by Rahman et al. (2018). Using a two-stage detector is an easier way to achieve generalized zero-shot detection (gZSD), since these proposal generators do not need to be trained to work with unseen objects: they generate proposals regardless of the content inside the bounding box, and the confidence and class predictions are handled by the following gZSL network. However, these methods are inevitably slow due to the workload of class evaluation that comes with a large number of region proposals. Zhu et al. (2019) proposed to use YOLOv2 as the backbone to detect and classify unseen objects. The use of a one-stage detector improved detection speed compared to two-stage detectors.
The mentioned methods above have all focused on outdoor scenes, and their commonly used datasets are MS COCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2010). The objects inside these
two datasets have a high variability within each class. For a robot that works in an indoor environment, recognizing objects at the class level is not good enough, since objects belonging to the same category may have very different usages. For example, a housekeeping robot should be able to differentiate the blue cup and the pink cup and hand them to a boy and a girl, respectively, rather than recognize them both as cups. Abdalwhab & Liu (2019) tried to use the SUN RGB-D dataset (Song et al., 2015) to perform gZSD in an indoor environment. However, objects in the SUN RGB-D dataset are labelled by class and are of furniture size. In this work, we use the YCB Video dataset (Xiang et al., 2017), which includes 21 distinct objects of desktop size. The object size and environment in the YCB Video dataset are more closely related to the setup we may encounter in a manufacturing environment.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
Assume we have n objects labelled as g_i = {b_i, a_i}, i = 1, ..., n, where the 4-dimensional vector b_i = {x_i, y_i, w_i, h_i} denotes the ground-truth bounding box's center-point location x_i, y_i and its width and height w_i, h_i. a_i is a 16-dimensional vector describing the feature attributes of the object g_i. In the training dataset, all objects are seen objects, represented by g_i ∈ O_seen. The validation dataset is also composed of seen objects only. In the test dataset, both seen objects O_seen and unseen objects O_unseen are present, with O_seen ∩ O_unseen = ∅. The goal of this work is to predict {b^pred, c^pred, a^pred} for g^pred ∈ O_seen ∪ O_unseen. The extra term c^pred represents the confidence level of the existence of an object within the bounding box b^pred.
3.2 NETWORK ARCHITECTURE
The simplified architecture of our YOLOv5-ZS network is shown in Figure 1. The network takes an RGB image as input and outputs predictions at three feature levels. The input image size is 640*640*3 in our network, since all images in the YCB Video dataset are 640*480 pixels; they are padded with grey pixels on the top and bottom to become square. The image is first processed by the backbone (DarkNet) and the neck (PANet). The size of DarkNet and PANet is changeable in YOLOv5: by changing the number of convolutional layers and the feature map depth, four versions of YOLOv5 can be created, namely YOLOv5s (small), YOLOv5m (medium), YOLOv5l (large) and YOLOv5x (extra-large). As the network becomes bigger and deeper, its detection accuracy increases but its detection speed decreases. In our work, we chose YOLOv5s. Block T_f represents the extracted features at three levels; they are passed to the following blocks: bounding box and attribute prediction (T_b), feature concatenation (T_c), objectness prediction (T_p) and output concatenation (T_o). They are explained in detail in the following sections.
3.2.1 FEATURE EXTRACTION
The T_f block is composed of three tensors, and its structure is inherited from PANet. The Path Aggregation Network (PANet), proposed by Wang et al. (2019), is a network with both top-down and bottom-up data flow. The bi-directional data flow ensures that image features are better preserved in the later convolutional layers. Using PANet, we also obtain image features at three levels. As Figure 1 shows, the feature tensors are of size 80*80*128, 40*40*256 and 20*20*512: as the network goes deeper, the feature map gets smaller and deeper. From this block onward, each data block consists of three tensors; however, for easier representation and a clearer diagram, we depict each later block as a single tensor T.
3.2.2 OBJECT LOCALIZATION
The locations of the bounding boxes are predicted in block T_b. Each tensor in this block has the same depth of 60. In YOLO-series detectors, the width and height of a detection layer represent the number of grid cells on the image. For example, 80*80 means the image is evenly divided into 6400 cells, and each cell is responsible for predicting bounding boxes whose center points fall in that cell. As the number of cells becomes smaller, the size of each cell becomes bigger and hence focuses better on bigger objects. The three different grid sizes thus allow the network to detect objects of various sizes. Each tensor's depth in this block is 60, since the network needs to make three predictions with anchors of different width/height ratios for each cell. For each prediction, b^p has 4 values and a^pred has 16 values; thus the tensor has 3*(4+16) = 60 channels. Note that the bounding box prediction b^p = {t_x, t_y, t_w, t_h} is relative to the location of the grid cell and the size of the anchor. The actual location and size of the bounding box b^pred are calculated with the following equations:
x = 2σ(t_x) − 0.5 + c_x
y = 2σ(t_y) − 0.5 + c_y
w = p_w · (2σ(t_w))²
h = p_h · (2σ(t_h))²
where c_x, c_y indicate the location of the top-left corner of a grid cell, and p_w, p_h indicate the width and height of the anchor.
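A minimal sketch of this decoding (following the standard YOLOv5 scheme; variable names are illustrative):

```python
import torch

def decode_box(t, grid_xy, anchor_wh):
    """t = (t_x, t_y, t_w, t_h) are raw outputs for one prediction;
    grid_xy = (c_x, c_y) is the cell's top-left corner; anchor_wh = (p_w, p_h)."""
    s = torch.sigmoid(t)
    xy = 2.0 * s[..., 0:2] - 0.5 + grid_xy      # box center (x, y)
    wh = anchor_wh * (2.0 * s[..., 2:4]) ** 2   # box size (w, h)
    return torch.cat([xy, wh], dim=-1)

# e.g. decode_box(torch.zeros(4), torch.tensor([10., 10.]), torch.tensor([3., 5.]))
```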
3.2.3 ATTRIBUTE PREDICTION
As mentioned in the previous section, block T_b also needs to predict the 16-dimensional attribute vector for each object. Unlike other works that describe objects with semantic vectors learned by Word2Vec or FastText, our attribute vectors are constructed by human visual evaluation. The elements of the attribute vector are common colors and shapes that appear across all the objects. There are two reasons we took a different approach: (1) class names such as “people” in previous works can easily be translated into semantic vectors using existing algorithms, while the object names in the YCB Video dataset are instance-specific, such as “master chef can”, and cannot be directly translated; (2) there is no visual variation within each object, so we can determine the attributes an object contains by human evaluation. The 16-dimensional attribute vector contains: white, blue, red, yellow, silver, black, brown, bottle, cup, can, clamp, slim, circle, cylinder, box, rectangular. Each object g_i is described by several attributes in the form of a binary attribute vector. For example, the object ‘red cup’ has the attributes red, cup, circle and cylinder, and thus the attribute vector [0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0]: the positions corresponding to red, cup, circle and cylinder are labelled ‘1’, and the rest are labelled ‘0’. The predicted output a^pred is a 16-dimensional vector of floating-point numbers, each between 0 and 1, indicating the confidence level of an attribute.
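For concreteness, the attribute vector can be built as follows; the attribute order follows the list above, and the assertion reproduces the ‘red cup’ example.

```python
ATTRIBUTES = ["white", "blue", "red", "yellow", "silver", "black", "brown",
              "bottle", "cup", "can", "clamp", "slim", "circle", "cylinder",
              "box", "rectangular"]

def attribute_vector(tags):
    """Binary attribute vector: 1 where the object has the attribute."""
    return [1 if a in tags else 0 for a in ATTRIBUTES]

# "red cup": red, cup, circle, cylinder
assert attribute_vector({"red", "cup", "circle", "cylinder"}) == \
       [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
```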
3.2.4 CONFIDENCE PREDICTION
After the bounding box and attribute prediction, block T_f and block T_b are concatenated together to form block T_c. Our objectness prediction layer is learned from the concatenated block T_c. In the original YOLOv5 detection layer, the objectness confidence is learned in the same block as T_b. However, learning objectness confidence only from the feature layer would cause the network to recognize only seen objects and treat all unseen objects as background. Thus, we concatenate the T_f block and the T_b block to let the network also learn from the bounding box and attribute predictions. In this case, the network is able to recognize unseen objects by the attributes they have. Detectors in T_p have only three channels; each channel of a grid cell is the confidence score of the corresponding bounding box. The network makes (80*80 + 40*40 + 20*20) * 3 = 25200 predictions in total. In the end, the output block T_o is the concatenation of the bounding box, attribute and objectness predictions.
3.3 LOSS FUNCTION DESIGN
The total loss in our algorithm is composed of three parts: localization loss, attribute loss and objectness loss. In the following sections, we will show how the loss functions are designed and implemented.
3.3.1 LOCALIZATION LOSS
In YOLOv5, the localization loss is calculated with the cIoU loss proposed by Zheng et al. (2021). Compared to the original IoU loss, the cIoU loss is more precise and converges much faster. To calculate the cIoU loss, we first compute the following quantities:
IoU = (Area_pred ∩ Area_gt) / (Area_pred ∪ Area_gt)

α = υ / ((1 − IoU) + υ)

υ = (4/π²) · (arctan(w_gt / h_gt) − arctan(w_pred / h_pred))²
When the loss is calculated, not all bounding box predictions are used. A predicted bounding box is used only when its center point falls into the same grid cell as the ground-truth bounding box's center point and it has the highest IoU among the three predictions in that cell. We denote the selection of bounding box predictions by λ_i, which is set to 1 when the prediction is selected and 0 otherwise. Since our output has three different feature levels, we define n as the level number. The total number of predictions m equals 80*80*3 = 19200, 40*40*3 = 4800, and 20*20*3 = 1200 for n = 1, 2, 3, respectively. The localization loss is defined as the summation of the mean cIoU loss over the layers, shown in the following equation, where d is the distance between the two boxes' centers and c is the diagonal length of the minimum enclosing box of the two boxes.
L_loc = Σ_{j=1}^{n} (1/m) Σ_{i=1}^{m} λ_i · (1 − IoU_i + d_i²/c_i² + α_i·υ_i)
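A sketch of the per-box cIoU term summed in L_loc, under the assumption that boxes are given as (x, y, w, h):

```python
import math
import torch

def ciou_loss(pred, gt, eps=1e-7):
    """Per-box cIoU loss: 1 − IoU + d²/c² + α·υ."""
    p1, p2 = pred[..., :2] - pred[..., 2:] / 2, pred[..., :2] + pred[..., 2:] / 2
    g1, g2 = gt[..., :2] - gt[..., 2:] / 2, gt[..., :2] + gt[..., 2:] / 2
    inter = (torch.min(p2, g2) - torch.max(p1, g1)).clamp(min=0).prod(-1)
    union = pred[..., 2:].prod(-1) + gt[..., 2:].prod(-1) - inter
    iou = inter / (union + eps)
    d2 = ((pred[..., :2] - gt[..., :2]) ** 2).sum(-1)                  # center distance²
    c2 = ((torch.max(p2, g2) - torch.min(p1, g1)) ** 2).sum(-1) + eps  # enclosing diagonal²
    v = (4 / math.pi ** 2) * (torch.atan(gt[..., 2] / gt[..., 3])
                              - torch.atan(pred[..., 2] / pred[..., 3])) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - iou + d2 / c2 + alpha * v
```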
3.3.2 ATTRIBUTE LOSS
For calculating the attribute loss, we use the binary cross-entropy (BCE) loss with the sigmoid function σ. Since the ground-truth value of an attribute e^gt is either 0 or 1, the predicted attribute value e^pred needs to be passed through a sigmoid function first to constrain it to between 0 and 1. A new term z is introduced in this function; it represents the total number of attributes in the vector, which is 16. A bounding box's attribute loss is the summation of the BCE loss over every attribute term. All other symbols keep the same meaning as in Section 3.3.1.
L_att = Σ_{j=1}^{n} (1/m) Σ_{i=1}^{m} Σ_{k=1}^{z} λ_i · [ −e_{i,k}^gt · log(σ(e_{i,k}^pred)) − (1 − e_{i,k}^gt) · log(1 − σ(e_{i,k}^pred)) ]
3.3.3 OBJECTNESS LOSS
Different from the localization loss and the attribute loss, the objectness loss is calculated from all predictions rather than from the positive predictions only; thus, the term λ_i is dropped in the objectness calculation. The BCE loss with the sigmoid function is also used here. p^gt is the ground-truth probability of the presence of an object in the bounding box, which equals 1 when an object is present and 0 otherwise. p^pred is the predicted confidence score, constrained to between 0 and 1 with the sigmoid function.
L_obj = Σ_{j=1}^{n} (1/m) Σ_{i=1}^{m} [ −p_i^gt · log(σ(p_i^pred)) − (1 − p_i^gt) · log(1 − σ(p_i^pred)) ]
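Both L_att and L_obj are sigmoid-plus-BCE terms; in practice this is the numerically stabler BCE-with-logits, as the following sketch (with illustrative shapes) shows:

```python
import torch
import torch.nn.functional as F

attr_logits  = torch.randn(8, 16)                    # e^pred for 8 positive boxes
attr_targets = torch.randint(0, 2, (8, 16)).float()  # e^gt
obj_logits   = torch.randn(25200)                    # p^pred for all predictions
obj_targets  = torch.zeros(25200)                    # p^gt (1 where a box exists)

l_att = F.binary_cross_entropy_with_logits(attr_logits, attr_targets)
l_obj = F.binary_cross_entropy_with_logits(obj_logits, obj_targets)
```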
4 EXPERIMENTS
4.1 DATASET SETTING
In Table 1 below we show how the original YCB Video dataset is split into our train, validation and test datasets. The YCB Video dataset consists of 92 videos, each with thousands of frames. 21 daily objects are included in the dataset, and a subset of them is placed in the scene of each video. Since the object setup does not change while a video is being recorded, the objects in a video remain constant. Thus, once the unseen objects are picked from all objects, all images that contain any of these four objects need to be allocated to the test dataset. We picked four objects as unseen objects in our split: gelatin box, mustard bottle, pitcher base and power drill, labelled with bold text in Table 1. In terms of detection difficulty, the gelatin box and the mustard bottle are easy, the pitcher base is harder, and the power drill is the hardest. This ranking is based on the attributes they share with the seen objects, and it is later supported by the detection scores.
After the four unseen objects are picked, the 31 videos that contain only seen objects are selected to be used for the train and validation datasets. The remaining 61 videos, which contain at least one unseen object, are allocated to the test dataset. The 31 videos containing only seen objects have 45,272 frames in total. We randomly picked 20% of them (9,040 images) for the validation dataset; the remaining 80% (36,232 images) are placed in the train dataset. For the test dataset, all frames of the 61 videos (88,664 images) are used. In Table 1, we also show the number of labels for each object in each dataset.
4.2 TESTING RESULT
During testing, all images in the test dataset were used. Since this work focuses on detection only, we evaluate only the recall rate of the algorithm. We define an object as successfully detected, i.e., a true positive (TP), if the IoU between its ground-truth bounding box (GT) and the predicted bounding box is greater than 0.5. The recall rate is defined as:
Recall = TP / GT
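A minimal Python sketch of this evaluation rule, with corner-format (x1, y1, x2, y2) boxes as an assumption:

```python
def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def recall_at_50(pred_boxes, gt_boxes):
    """A GT box counts as a true positive if some prediction overlaps it
    with IoU > 0.5."""
    tp = sum(any(box_iou(g, p) > 0.5 for p in pred_boxes) for g in gt_boxes)
    return tp / max(len(gt_boxes), 1)
```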
In the following table, the recall for each object is shown.
Table 2: Recall for each object
Number 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Recall 0.88 0.72 0.83 0.85 0.73 0.75 0.61 0.46 0.86 0.73 0.05 0.51 0.63 0.90 0.07 0.21 0.44 0.26 0.42 0.21 0.72
4.3 DISCUSSION
Based on the testing results, we can say that the algorithm works well when similar seen objects exist, but not for objects that are very different from the seen objects. From Table 2, the first thing we notice is that the recall rates for different objects vary widely; some seen objects even have a lower recall rate than unseen objects. The main cause is the imbalanced number of labels between the train dataset and the test dataset. For example, the number of train labels for the sugar box is 0.75 times the number of its test labels, and its recall reaches 0.83. For the wood block, the number of train labels is only half the number of test labels, and its recall rate is only 0.21. Another factor that affects the recall rate is the variation of illumination. Since the YCB Video dataset consists of videos, images in the train dataset can only cover a very limited range of illumination conditions. Thus, the algorithm performs worse on test images with illumination conditions that have never been seen before.
For the four unseen objects, the recall rates for objects number 4 and 7 are higher than those for objects number 10 and 14, which is similar to what we expected. In particular, object number 4's recall rate is even higher than that of many seen objects, because its color and shape commonly appear among the seen objects. In contrast, the attributes contained in objects number 10 and 14 are hardly seen in the train dataset. Thus, we can conclude that seen objects with a similar color or shape to unseen objects can increase the detection rate of the unseen objects.
5 CONCLUSION
In this paper, we proposed a modified YOLOv5 neural network to perform generalized zero-shot detection on seen and unseen objects. We also proposed a novel splitting method for the YCB Video dataset to train and test gZSD algorithms. By changing the final detection layers of YOLOv5, we significantly improved its gZSD performance on our proposed YCB Video dataset split. For industrial robots that work in a flexible and dynamic manufacturing environment, our gZSD algorithm for detecting daily objects is a more feasible solution than traditional vision algorithms that require training for every object. In our experiments, we found that our algorithm is more sensitive to color than to shape. Thus, in the future, we can experiment with RGB-D images rather than RGB images to evaluate the improvement brought by the extra depth channel. | 1. What is the main contribution of the paper regarding zero-shot detection?
2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty and empirical results?
3. Why is the mAP score not reported, and what are the implications of this omission?
4. How does the reviewer assess the relevance and scalability of the attribute list used in the paper?
5. Are there any concerns or suggestions regarding the comparisons with prior art and the reporting of results?
6. What are some additional papers on zero-shot detection that could have been cited in the review? | Summary Of The Paper
Review | Summary Of The Paper
The paper performs zero-shot detection of seen and unseen objects in scenarios with a more fine-grained division of objects. For example, in practical computer vision applications such as industrial and indoor environments, it might be necessary to differentiate between the same object in different colors, sizes and shapes. In this paper, unseen objects can be classified based on the attributes: white, blue, red, yellow, silver, black, brown, bottle, cup, can, clamp, slim, circle, cylinder, box, rectangular. The authors perform zero-shot detection using a simple approach of predicting attributes. The attributes are labeled in the training images, and the method has to generalize to unseen objects in the test images.
Review
Strengths
The overall problem seems to be practically relevant. Most existing object detectors and classifiers are expected not to overfit to the color or size of an object as long as the objects are from the same class. If the neural network saw only black dogs during training, it should ideally also predict white dogs in deployment.
Generalized zero-shot learning is less explored compared to zero-shot learning.
Weaknesses
The novelty is not very strong. Attribute-based zero-shot learning is a common paradigm and is discussed in [A,B,C,D]. Attribute-based ZSD is also discussed in [E]. The authors extend it to detection on a dataset with different classes of objects than seen in other papers. Further, they use YOLOv5 instead of YOLOv2 as the backbone. There are some changes to the neural net architecture but there is no empirical evidence to suggest those are beneficial.
The empirical results are underwhelming. The recall ratio is very low for 2 classes. If the authors are claiming zero-shot detection, then the method should ideally be able to detect unseen objects even if those attributes are scarce in the training set. Further, just a single dataset is used and there is no comparison to prior art. If none of the prior art is applicable to this problem, the authors need to explain why that is the case.
The attribute list is too limiting and will not scale. These attribute lists are just directed towards this single dataset. For general applications, it might be difficult to list out and label all attributes in all images. Further, a hierarchical label might be more useful. Due to this reason, it might be more beneficial to learn embeddings/representations that are more general though not interpretable.
Questions
Why is the mAP, which is the standard metric for object detection, not reported? The algorithm can still generate false positives. In [E], the authors report mAP over all classes as they do not classify the objects.
More papers on ZSD that are not cited: [F,G,H,I].
References:
[A] https://ieeexplore.ieee.org/document/6571196
[B] https://ieeexplore.ieee.org/document/6126373
[C] https://ieeexplore.ieee.org/document/6751374
[D] https://link.springer.com/chapter/10.1007/978-3-642-15555-0_10
[E] https://arxiv.org/pdf/1803.07113.pdf
[F] https://www.sciencedirect.com/science/article/pii/S2667241321000124
[G] https://openaccess.thecvf.com/content_CVPR_2020/html/Zhu_Dont_Even_Look_Once_Synthesizing_Features_for_Zero-Shot_Detection_CVPR_2020_paper.html
[H] https://ojs.aaai.org/index.php/AAAI/article/view/6868
[I] https://arxiv.org/abs/1805.06157
ICLR | Title
Zero-shot detection of daily objects in YCB video dataset
Abstract
To let robots be able to manipulate objects, they have to sense the location of objects. With the development of visual data collecting and processing technology, robots are gradually evolving to localize objects in a greater field of view rather than being limited to a small space where the object could appear. To train such a robot vision system, pictures of all the objects need to be taken under various orientations and illumination. In the traditional manufacturing environment, this is applicable since objects involved in the production process does not change frequently. However, in the vision of smart manufacturing and high-mix-low-volume production, parts and products for robots to handle may change frequently. Thus, it is unrealistic to re-training the vision system for new products and tasks. Under this situation, we discovered the necessity to introduce a hot concept which is zero-shot object detection. Zero-shot object detection is a subset of unsupervised learning, and it aims to detect novel objects in the image with the knowledge learned from and only from seen objects. With zero-shot object detection algorithm, time can be greatly saved from collecting training data and training the vision system. Previous works focus on detecting objects in outdoor scenes, such as bikes, car, people, and dogs. The detection of daily objects is actually more challenging since the knowledge can be learned from each object is very limited. In this work, we explore the zero-shot detection of daily objects in indoor scenes since the objects’ size and environment are closely related to the manufacturing setup. The YCB Video Dataset is used in this work, which contains 21 objects in various categories. To the best of our knowledge, no previous work has explored zero-shot detection in this object size level and on this dataset.
1 INTRODUCTION
Industrial robots have received more and more attention in the manufacturing industry due to the rising cost of human labour and decreasing cost of industrial robots (Carlisle, 2017). Since robots can handle heavy and repetitive jobs better than human, many manufacturing planets have replaced human labours on the production line with robots (Robla-Gómez et al., 2017). In today’s manufacturing pattern of mass production, an industrial robot is only in charge of a certain processing step with dedicated parts. This manufacturing scenario does not require the robot to change target objects to work with frequently. However, with the recent development of control and communication technologies, the manufacturing industry is gradually evolving to high-mix-low-volume production that provides personalized product for customers (Lu et al., 2020). This is also known as smart manufacturing, in this scenario, the manufacturing system will become more flexible. Instead of tied to a specific task, robots will be allocated to various tasks depend on demand. Which requires the robots to be able to recognize a wide range of objects that could be involved during production.
With the development over the years, today's object detection and recognition algorithms such as Faster R-CNN (Ren et al., 2015), SSD (Liu et al., 2016), YOLO (Redmon et al., 2016), and EfficientDet (Tan et al., 2020) have reached high performance. With enough collected training data, these off-the-shelf algorithms can easily be applied to a robot's vision system to recognize and localize the objects involved in the production process. However, collecting and labelling data and training a neural network is a time-consuming process that requires expertise in the machine vision field. Even though there are data generation methods that can generate synthetic images and labels from CAD models (Wohlhart & Lepetit, 2015), frequently training a new neural network for additional new parts is still not realistic in personalized production.
Zero-shot learning (ZSL) is a learning paradigm that learns knowledge from seen categories and applies it to new categories in order to recognize objects that have never been seen before. In the works of Zhang & Saligrama (2015) and Zhang & Saligrama (2016), zero-shot learning algorithms already achieved reasonable accuracy in classifying unseen objects. While zero-shot learning aims to recognize only unseen categories, a more realistic problem called generalized zero-shot learning (gZSL) was proposed by Xian et al. (2017), which aims to recognize both seen and unseen categories. However, gZSL still has limitations and cannot be directly applied to solve the previously mentioned issues. Both ZSL and gZSL focus only on recognizing the object in an image; a strong assumption is thus made before applying these algorithms, namely that only one object appears in the image and that it is always located in the middle of the image. In this setting, ZSL and gZSL algorithms only need to analyze the categorical information in the image, not the location information; in other words, they take the whole image as one object proposal. In real life, cameras attached to robots move with the robot and cannot guarantee that the target object is always located in the middle of the camera's view. Thus, generalized zero-shot detection (gZSD) and gZSL have to be combined to achieve a vision system that can detect and recognize unseen objects, where gZSD localizes seen and unseen objects in the field of view and gZSL recognizes and categorizes them.
There are existing works in this field that try to solve gZSD and gZSL at the same time to complete the whole pipeline. For example, Rahman et al. (2018) combined Faster R-CNN with a gZSL module to generate object proposals and class predictions, and Zhu et al. (2019) built on YOLOv2, using a single-stage detector to generate object proposals. Previous works have all used outdoor datasets such as MS COCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2010), whose objects are grouped into broad categories. For example, men, women, and children are very different in visual appearance, but they are all allocated to the 'people' category. The variety of objects within one category can increase the generality of the trained algorithm. For detecting daily objects, we choose the YCB Video dataset (Xiang et al., 2017), whose 3D objects are similar in size and characteristics to the objects that could appear in a production pipeline. However, each object in such 3D object datasets is unique and cannot be grouped into categories, which makes generalization to unseen objects harder.
Given the problems and challenges we found in the future manufacturing environment and in gZSD, we propose to modify the base version of YOLOv5 (Jocher et al., 2021) to perform gZSD on the YCB Video dataset. Compared to two-stage detectors such as Faster R-CNN, one-stage detectors such as YOLO and SSD are much faster. YOLOv5, the latest version of the YOLO series of detectors, has proved to be faster and more accurate than previous versions. Compared to the work of Zhu et al. (2019), which used a modified YOLOv2 (Redmon & Farhadi, 2017) to perform gZSD, YOLOv5 outputs object proposals at three different feature levels and thus covers object sizes better. For training and testing our algorithm, four of the 21 objects in the YCB Video dataset are picked as unseen objects; any image that contains these four unseen objects never appears during training but is used during testing. For every object, the class label is translated into an attribute vector that contains the colour and shape information of the object. Thus, we transform the classical single-label problem into a multi-label problem, letting the neural network learn attribute labels and apply them to unseen objects. Note that in this work we only address the gZSD problem, not gZSL; that is, we only aim to localize seen and unseen objects in images with bounding boxes, not to assign class labels to the objects.
Our contributions in this paper are threefold: 1. A novel neural network structure based on YOLOv5 that performs generalized zero-shot detection; the output bounding boxes can be further combined with other gZSL algorithms to achieve full zero-shot object detection and recognition. 2. A novel splitting method for the YCB Video dataset that splits the dataset by seen and unseen objects; this split can be used for both gZSD and gZSL research related to daily objects. 3. A novel attribute labelling method for the objects in the YCB Video dataset, converting the class labels into 16 attributes that represent the colour and shape information of an object for the neural network to learn.
2 RELATED WORK
2.1 OBJECT DETECTION
Research on object detection and recognition has developed rapidly in the past decade; the modern deep-learning era of image classification can be traced to the work of Krizhevsky et al. (2012). Since then, the speed and accuracy of these algorithms have improved continuously. Detection algorithms can be divided into two categories: two-stage detectors and one-stage detectors. Two-stage detectors such as Faster R-CNN (Ren et al., 2015) and R-FCN (Dai et al., 2016) generate object proposals with a Region Proposal Network (RPN) and then classify objects based on these proposals. One-stage detectors such as SSD (Liu et al., 2016), YOLO (Redmon et al., 2016), and EfficientDet (Tan et al., 2020) generate object proposals and classify objects at the same time by dividing the image into grids. Thus, one-stage detectors are faster than two-stage detectors and have gained more attention recently.
YOLOv5 (Jocher et al., 2021), used in this paper, is the fifth version of the classic YOLO detector. There is still some debate in the field about whether this algorithm qualifies as the fifth version, and its maintainers have not published a paper describing it; however, on the MS COCO dataset (Lin et al., 2014), YOLOv5 has been reported to outperform the state-of-the-art EfficientDet in both speed and accuracy. YOLOv5 is composed of three parts: backbone, neck, and head. An input image is first processed by the DarkNet backbone (Bochkovskiy et al., 2020) and then passed to the PANet neck (Wang et al., 2019), which processes and splits the feature map into three feature levels for better coverage of objects of different sizes. Finally, the YOLO detector head outputs predictions at the three levels based on these feature maps. In this work, we use YOLOv5 as the base algorithm and modify the detectors in the head of the network to enable it to detect both seen and unseen objects.
2.2 ZERO-SHOT LEARNING
Given images with class labels, zero-shot learning (ZSL) aims to classify unseen classes based on knowledge learned from seen classes (Fu et al., 2018). ZSL works by learning semantic information from seen classes and reassembling the semantic attributes to predict unseen classes (Zhang & Saligrama, 2016; Rahman et al., 2018). In other words, the algorithm learns the mapping from the visual domain to the semantic domain during training and makes predictions by mapping from the semantic domain back to the visual domain. However, ZSL algorithms are designed for recognizing unseen classes only, and their performance degrades when both seen and unseen classes need to be recognized. Xian et al. (2017) proposed generalized ZSL (gZSL), which relaxes this constraint and includes seen classes as recognition targets as well. However, both ZSL and gZSL assume that only one target exists in an image and that it is located right in the middle of the image; they still lack the ability to isolate an object from the background or from occlusion by other objects.
2.3 ZERO-SHOT DETECTION AND RECOGNITION
Several methods have recently been proposed to solve the zero-shot detection and recognition problem; these algorithms not only detect the locations of seen and unseen objects in an image but also classify them. Most of them adopt two-stage detectors: Bansal et al. (2018) used Edge-Boxes to generate region proposals, and Rahman et al. (2018) used a Region Proposal Network (RPN). Using two-stage detectors is an easier way to achieve generalized zero-shot detection (gZSD), since these proposal generators do not need to be trained to work with unseen objects; they generate proposals regardless of the content inside the bounding box, and the confidence and class predictions are handled by the following gZSL network. However, these methods are inevitably slow due to the class-evaluation workload that comes with a large number of region proposals. Zhu et al. (2019) proposed to use YOLOv2 as the backbone to detect and classify unseen objects; the use of a one-stage detector improved detection speed compared to two-stage detectors.
The methods mentioned above have all focused on outdoor scenes, and their commonly used datasets are MS COCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2010). The objects in these two datasets have high intra-class variability. For a robot that works in an indoor environment, recognizing objects at the class level is not good enough, since objects belonging to the same category may have very different uses. For example, a housekeeping robot should be able to differentiate a blue cup from a pink cup and hand them to a boy and a girl, respectively, rather than recognize them both simply as cups. Abdalwhab & Liu (2019) used the SUN RGB-D dataset (Song et al., 2015) to perform gZSD in an indoor environment; however, objects in SUN RGB-D are labelled at the class level and are furniture-sized. In this work, we use the YCB Video dataset (Xiang et al., 2017), which includes 21 distinct desktop-sized objects. The object sizes and environment in the YCB Video dataset are more closely related to the setup we may encounter in a manufacturing environment.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
Assume we have n labelled objects {g_i = (b_i, a_i)}, i = 1, ..., n, where the 4-dimensional vector b_i = (x_i, y_i, w_i, h_i) denotes the ground-truth bounding box's centre-point location x_i, y_i and its width and height w_i, h_i, and a_i is a 16-dimensional vector describing the attributes of object g_i. In the training dataset, all objects are seen objects, represented by g_i ∈ O_seen; the validation dataset is also composed of seen objects only. In the test dataset, both seen objects O_seen and unseen objects O_unseen are present, with O_seen ∩ O_unseen = ∅. The goal of this work is to predict (b_pred, c_pred, a_pred) for g_pred ∈ O_seen ∪ O_unseen, where the extra term c_pred represents the confidence that an object exists within bounding box b_pred.
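For concreteness, the quantities above can be sketched as plain data structures (an illustrative sketch; the field names are ours, not taken from the paper's code):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GroundTruth:
    box: List[float]         # b_i = [x, y, w, h], centre-point format
    attributes: List[int]    # a_i, 16-dim binary attribute vector

@dataclass
class Prediction:
    box: List[float]         # b_pred = [x, y, w, h]
    confidence: float        # c_pred, objectness confidence in [0, 1]
    attributes: List[float]  # a_pred, 16 per-attribute confidences in [0, 1]
```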
3.2 NETWORK ARCHITECTURE
The simplified architecture of our YOLOv5-ZS network is shown in Figure 1. The network takes an RGB image as input and outputs predictions at three feature levels. The input image size is 640*640*3, since all images in the YCB Video dataset are 640*480 pixels; they are padded with grey pixels on the top and bottom to become square. The image is first processed by the backbone (DarkNet) and neck (PANet). The sizes of DarkNet and PANet are configurable in YOLOv5: by changing the number of convolutional layers and the feature map depth, four versions of YOLOv5 can be created, namely YOLOv5s (small), YOLOv5m (medium), YOLOv5l (large), and YOLOv5x (extra-large). As the network becomes bigger and deeper, detection accuracy increases but detection speed decreases. In this work, we use YOLOv5s. Tensor T_f represents the block of extracted features at three levels; these are passed to the following blocks: bounding box and attribute prediction (T_b), feature concatenation (T_c), objectness prediction (T_p), and output concatenation (T_o). They are explained in detail in the following sections.
3.2.1 FEATURE EXTRACTION
The T_f block is composed of three tensors, and its structure is inherited from PANet. The Path Aggregation Network (PANet) proposed by Wang et al. (2019) has both top-down and bottom-up data flow; the bi-directional flow ensures that image features are better preserved in the later convolutional layers. Using PANet, we also obtain image features at three levels. As Figure 1 shows, the feature tensors have sizes 80*80*128, 40*40*256, and 20*20*512: as the network goes deeper, the feature map becomes spatially smaller and deeper in channels. From this block onward, each data block consists of three tensors; however, for a simpler presentation and a clearer diagram, we represent each later block as a single tensor T.
3.2.2 OBJECT LOCALIZATION
The locations of the bounding boxes are predicted in block T_b. Each tensor in this block has the same depth of 60. In YOLO-series detectors, the width and height of a detection layer represent the number of grid cells on the image; for example, 80*80 means the image is evenly divided into 6400 cells, and each cell is responsible for predicting bounding boxes whose centre points fall within it. As the grid becomes coarser, each cell becomes bigger and hence focuses better on larger objects; the three different grid sizes thus allow the network to detect objects of various sizes. Each tensor's depth in this block is 60 because the network makes three predictions per cell, using anchors with different width/height ratios, and each prediction consists of 4 values for b_p and 16 values for a_pred, giving 3*(4+16) = 60 channels. Note that the bounding box prediction b_p = (t_x, t_y, t_w, t_h) is relative to the location of the grid cell and the size of the anchor; the actual location and size of the bounding box b_pred are computed with the following equations.
$$x = 2\sigma(t_x) - 0.5 + c_x, \qquad y = 2\sigma(t_y) - 0.5 + c_y, \qquad w = p_w \left(2\sigma(t_w)\right)^2, \qquad h = p_h \left(2\sigma(t_h)\right)^2,$$

where $c_x, c_y$ indicate the location of the top-left corner of the grid cell and $p_w, p_h$ indicate the width and height of the anchor.
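The decoding of a raw prediction into an absolute box can be sketched in a few lines (an illustrative snippet; the function names are ours):

```python
import math

def sigmoid(t: float) -> float:
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw outputs (tx, ty, tw, th) into an absolute box, given the
    grid cell's top-left corner (cx, cy) and the anchor size (pw, ph)."""
    x = 2.0 * sigmoid(tx) - 0.5 + cx
    y = 2.0 * sigmoid(ty) - 0.5 + cy
    w = pw * (2.0 * sigmoid(tw)) ** 2
    h = ph * (2.0 * sigmoid(th)) ** 2
    return x, y, w, h
```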
3.2.3 ATTRIBUTE PREDICTION
As mentioned in the previous section, block T_b also predicts the 16-dimensional attribute vector for each object. Unlike other works that describe objects with semantic vectors learned by Word2Vec or FastText, our attribute vectors are constructed by human visual inspection; the elements of the attribute vector are common colours and shapes that appear across all objects. We take this different approach for two reasons: (1) class names such as "people" in previous works can easily be translated into semantic vectors using existing algorithms, while object names in the YCB Video dataset are instance-specific (e.g., "master chef can") and cannot be directly translated; and (2) there is no visual variation within each object, so we can determine the attributes an object contains by human evaluation. The 16-dimensional attribute vector contains: white, blue, red, yellow, silver, black, brown, bottle, cup, can, clamp, slim, circle, cylinder, box, rectangular. Each object g_i is described by several attributes in the form of a multi-hot embedding. For example, the object "red cup", which has the attributes red, cup, circle, and cylinder, has the attribute vector [0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0]: the positions corresponding to red, cup, circle, and cylinder are labelled '1' and the rest are labelled '0'. The predicted output a_pred is a 16-dimensional vector of floating-point numbers, each between 0 and 1, indicating the confidence of the corresponding attribute.
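The attribute encoding can be reproduced with a few lines of code (a minimal sketch; the attribute order follows the list above):

```python
ATTRIBUTES = ["white", "blue", "red", "yellow", "silver", "black", "brown",
              "bottle", "cup", "can", "clamp", "slim", "circle", "cylinder",
              "box", "rectangular"]

def encode_attributes(present):
    """Turn a set of attribute names into the 16-dim multi-hot vector."""
    return [1 if name in present else 0 for name in ATTRIBUTES]

# The 'red cup' example from the text:
assert encode_attributes({"red", "cup", "circle", "cylinder"}) == \
    [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
```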
3.2.4 CONFIDENCE PREDICTION
After bounding box and attribute prediction, blocks T_f and T_b are concatenated to form block T_c, and our objectness prediction layer is learned from this concatenated block. In the original YOLOv5 detection layer, the objectness confidence is learned in the same block as T_b. However, learning objectness confidence from the feature layer alone causes the network to recognize only seen objects and to treat all unseen objects as background. We therefore concatenate the T_f and T_b blocks so that the network also learns from the bounding box and attribute predictions; in this way, the network can recognize unseen objects by the attributes they have. Detectors in T_p have only three channels, and each channel of a grid cell is the confidence score of the corresponding bounding box. The network makes (80*80 + 40*40 + 20*20) * 3 = 25200 predictions in total. Finally, the output block T_o is the concatenation of the bounding box, attribute, and objectness predictions.
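The structural change can be sketched in PyTorch as follows (an illustrative sketch for one feature level; the layer shape and class name are our assumptions, not taken from the paper's code):

```python
import torch
import torch.nn as nn

class ZSObjectnessHead(nn.Module):
    """Predict objectness from the concatenation of the feature map T_f and
    the box/attribute predictions T_b, rather than from T_f alone."""
    def __init__(self, feat_channels: int, pred_channels: int = 60):
        super().__init__()
        # 3 anchors per grid cell -> 3 objectness channels
        self.obj = nn.Conv2d(feat_channels + pred_channels, 3, kernel_size=1)

    def forward(self, t_f: torch.Tensor, t_b: torch.Tensor) -> torch.Tensor:
        t_c = torch.cat([t_f, t_b], dim=1)  # feature concatenation (block T_c)
        return self.obj(t_c)                # objectness logits (block T_p)
```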
3.3 LOSS FUNCTION DESIGN
The total loss in our algorithm is composed of three parts: localization loss, attribute loss and objectness loss. In the following sections, we will show how the loss functions are designed and implemented.
3.3.1 LOCALIZATION LOSS
In YOLOv5, the localization loss is calculated with the cIoU loss proposed by Zheng et al. (2021). Compared to the original IoU loss, cIoU loss is more precise and converges much faster. To compute the cIoU loss, we first need the following quantities:
$$IoU = \frac{|Area_{pred} \cap Area_{gt}|}{|Area_{pred} \cup Area_{gt}|}, \qquad \alpha = \frac{\upsilon}{(1 - IoU) + \upsilon}, \qquad \upsilon = \frac{4}{\pi^2}\left(\arctan\frac{w_{gt}}{h_{gt}} - \arctan\frac{w_{pred}}{h_{pred}}\right)^2.$$
Not all bounding box predictions contribute to this loss: a predicted box is used only when its centre point falls into the same grid cell as the ground-truth box's centre point and it has the highest IoU among the three predictions in that cell. We denote this selection with λ_i, which is set to 1 when the prediction is selected and 0 otherwise. Since our output has three feature levels, we let n denote the number of levels; the total number of predictions m per level equals 80*80*3 = 19200, 40*40*3 = 4800, and 20*20*3 = 1200 for levels 1, 2, and 3, respectively. The localization loss is defined as the sum over levels of the mean cIoU loss, shown in the following equation, where d is the distance between the two boxes' centres and c is the diagonal length of the minimum enclosing box of the two boxes.
$$L_{loc} = \sum_{j=1}^{n} \frac{1}{m} \sum_{i=1}^{m} \lambda_i \left(1 - IoU_i + \frac{d_i^2}{c_i^2} + \alpha_i \upsilon_i \right)$$
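A scalar version of the per-box cIoU term can be sketched as follows (an illustrative snippet; a batched tensor version follows the same arithmetic):

```python
import math

def ciou_loss(pred, gt, eps=1e-9):
    """cIoU loss for two boxes in (x, y, w, h) centre format."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = gt
    # Intersection and union areas
    iw = max(0.0, min(px + pw / 2, gx + gw / 2) - max(px - pw / 2, gx - gw / 2))
    ih = max(0.0, min(py + ph / 2, gy + gh / 2) - max(py - ph / 2, gy - gh / 2))
    inter = iw * ih
    iou = inter / (pw * ph + gw * gh - inter + eps)
    # Squared centre distance d^2 and enclosing-box diagonal c^2
    d2 = (px - gx) ** 2 + (py - gy) ** 2
    cw = max(px + pw / 2, gx + gw / 2) - min(px - pw / 2, gx - gw / 2)
    ch = max(py + ph / 2, gy + gh / 2) - min(py - ph / 2, gy - gh / 2)
    c2 = cw ** 2 + ch ** 2 + eps
    # Aspect-ratio consistency term and its weight
    v = (4 / math.pi ** 2) * (math.atan(gw / gh) - math.atan(pw / ph)) ** 2
    alpha = v / ((1 - iou) + v + eps)
    return 1 - iou + d2 / c2 + alpha * v
```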
3.3.2 ATTRIBUTE LOSS
For the attribute loss, we use binary cross-entropy (BCE) loss with the sigmoid function σ. Since the ground-truth value of an attribute e^gt is either 0 or 1, the predicted attribute value e^pred is first passed through a sigmoid to map it into [0, 1]. A new term z denotes the total number of attributes in the vector, which is 16. A bounding box's attribute loss is the sum of the BCE losses over all attribute terms; all other symbols have the same meaning as in Section 3.3.1.
$$L_{att} = \sum_{j=1}^{n} \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{z} \lambda_i \left[-e^{gt}_{i,k} \log\left(\sigma(e^{pred}_{i,k})\right) - \left(1 - e^{gt}_{i,k}\right) \log\left(1 - \sigma(e^{pred}_{i,k})\right)\right]$$
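The attribute loss can be implemented with the standard BCE-with-logits primitive (a minimal sketch for one feature level; the tensor shapes are our assumption):

```python
import torch
import torch.nn.functional as F

def attribute_loss(e_pred, e_gt, positive_mask):
    """e_pred: raw attribute logits, shape (m, 16); e_gt: float 0/1 targets,
    shape (m, 16); positive_mask: lambda_i selection indicator, shape (m,)."""
    per_pred = F.binary_cross_entropy_with_logits(
        e_pred, e_gt, reduction="none").sum(dim=1)  # sum over z = 16 attributes
    return (positive_mask * per_pred).mean()        # mean over m predictions
```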
3.3.3 OBJECTNESS LOSS
Unlike the localization and attribute losses, the objectness loss is calculated over all predictions rather than only the positive ones, so the term λ_i is dropped. BCE loss with the sigmoid function is used again: p^gt is the ground-truth probability of the presence of an object in the bounding box, equal to 1 when an object is present and 0 otherwise, and p^pred is the predicted confidence score, mapped into [0, 1] by the sigmoid.
$$L_{obj} = \sum_{j=1}^{n} \frac{1}{m} \sum_{i=1}^{m} \left[-p^{gt}_{i} \log\left(\sigma(p^{pred}_{i})\right) - \left(1 - p^{gt}_{i}\right) \log\left(1 - \sigma(p^{pred}_{i})\right)\right]$$
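The objectness loss is then a plain BCE over all predictions (a minimal sketch for one feature level):

```python
import torch
import torch.nn.functional as F

def objectness_loss(p_pred, p_gt):
    """p_pred: raw objectness logits, shape (m,); p_gt: float 0/1 presence
    targets, shape (m,). No lambda_i mask: all predictions contribute."""
    return F.binary_cross_entropy_with_logits(p_pred, p_gt, reduction="mean")
```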
4 EXPERIMENTS
4.1 DATASET SETTING
Table 1 shows how the original YCB Video dataset is split into our train, validation, and test sets. The YCB Video dataset consists of 92 videos, each with thousands of frames; 21 daily objects are included in the dataset, and a subset of them is placed in the scene of each video. Since the object setup does not change during recording, the objects in a video remain constant. Thus, once the unseen objects are picked, all images that contain any of them must be allocated to the test set. We pick four objects as unseen objects: gelatin box, mustard bottle, pitcher base, and power drill, marked in bold in Table 1. In terms of detection difficulty, the gelatin box and mustard bottle are easy, the pitcher base is harder, and the power drill is the hardest. This assessment is based on the attributes they share with the seen objects, and it is later supported by the detection scores.
After the four unseen objects are picked, the 31 videos that contain only seen objects are used for the train and validation sets, and the remaining 61 videos that contain at least one unseen object are allocated to the test set. The 31 seen-only videos have 45272 frames in total; we randomly pick 20% of them (9040 images) for the validation set and place the remaining 80% (36232 images) in the train set. For the test set, all frames of the 61 videos (88664 images) are used. Table 1 also shows the number of labels for each object in each set.
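The split rule can be sketched as follows (an illustrative snippet; the input format, two mappings from video id to its object set and its frame list, is our assumption):

```python
import random

UNSEEN = {"gelatin_box", "mustard_bottle", "pitcher_base", "power_drill"}

def split_dataset(video_objects, video_frames, val_fraction=0.2, seed=0):
    """Any video containing an unseen object goes entirely to the test set;
    frames of the remaining videos are split 80/20 into train/validation."""
    test = [f for v, objs in video_objects.items() if objs & UNSEEN
            for f in video_frames[v]]
    seen = [f for v, objs in video_objects.items() if not objs & UNSEEN
            for f in video_frames[v]]
    random.Random(seed).shuffle(seen)
    n_val = int(val_fraction * len(seen))
    return seen[n_val:], seen[:n_val], test  # train, validation, test
```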
4.2 TESTING RESULT
During testing, all images in the test set were used. Since this work focuses on detection only, we evaluate only the recall of the algorithm. We define an object as successfully detected, and count it as a true positive (TP), if the IoU between its ground-truth bounding box (GT) and the predicted bounding box is greater than 0.5. The recall rate is defined as:
$$Recall = \frac{TP}{GT}$$
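The metric can be computed as follows (a minimal sketch, assuming each ground-truth box has already been matched to its best-overlapping prediction):

```python
def recall(best_ious, iou_threshold=0.5):
    """best_ious: for each ground-truth box, the IoU of its best-matching
    predicted box (0.0 if there is no prediction)."""
    tp = sum(1 for iou in best_ious if iou > iou_threshold)
    return tp / len(best_ious)
```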
In the following table, the recall for each object is shown.
Table 2: Recall for each object
Number 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Recall 0.88 0.72 0.83 0.85 0.73 0.75 0.61 0.46 0.86 0.73 0.05 0.51 0.63 0.90 0.07 0.21 0.44 0.26 0.42 0.21 0.72
4.3 DISCUSSION
Based on the test results, the algorithm works well for unseen objects that have similar seen objects, but not for objects that are very different from all seen objects. From Table 2, the first thing we notice is that the recall rates vary widely across objects; some seen objects even have lower recall than unseen objects. The main cause is the imbalance in the number of labels between the train and test sets. For example, the number of training labels for the sugar box is 0.75 times its number of test labels, and its recall reaches 0.83; for the wood block, the number of training labels is only half the number of test labels, and its recall is only 0.21. Another factor affecting recall is the variation in illumination: since the YCB Video dataset consists of videos, the images in the train set cover only a very limited range of illumination conditions, so the algorithm performs worse on test images with illumination conditions it has never encountered.
Among the four unseen objects, the recall for objects 4 and 7 is higher than for objects 10 and 14, which matches our expectation. In particular, object 4 has a recall even higher than that of many seen objects; it achieves the highest recall among the unseen objects because its colour and shape commonly appear among seen objects, whereas the attributes of objects 10 and 14 are rarely seen in the train set. We can therefore conclude that seen objects with colours or shapes similar to an unseen object increase the detection rate of that unseen object.
5 CONCLUSION
In this paper, we proposed a modified YOLOv5 neural network to perform generalized zero-shot detection of seen and unseen objects. We also proposed a novel split of the YCB Video dataset for training and testing gZSD algorithms. By changing the final detection layers of YOLOv5, we significantly improved its gZSD performance on our proposed YCB Video split. For industrial robots that work in a flexible and dynamic manufacturing environment, our gZSD algorithm for detecting daily objects is a more feasible solution than a traditional vision pipeline that requires training for every object. In our experiments, we found that our algorithm is more sensitive to colour than to shape; in future work, we can therefore experiment with RGB-D images rather than RGB images to evaluate the improvement brought by the extra depth channel. | 1. What is the focus and contribution of the paper on zero-shot learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to real-world settings?
3. Do you have any concerns regarding the method's reliance on attribute encoding?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the evaluation and comparison with other works in the review? | Summary Of The Paper
Review | Summary Of The Paper
The paper tries to generalise the generalised zero-shot learning (gZSL) problem to full images as an object detection problem instead of an image classification problem. The authors propose the use of the YOLOv5 network as a backbone for predicting the attribute encoding of detected bounding boxes; in this way, they can use the attributes to suppress background objects/predictions based on the overlap between seen and unseen objects. They therefore combine the localisation and objectness losses with an attribute loss (BCE) to train the network. They evaluate on the YCB Video dataset with 21 objects, for their own method only.
Review
The paper presents an interesting application of ZSL. While ZSL has generally been a fairly academic problem, the authors' method can be used in (constrained) real-world settings with multiple objects in the scene. They pose it in robotics and smart manufacturing; however, it can be generalised.
The approach of extending YOLO to include the attributes makes a lot of sense; however, it raises the question: is the attribute encoding all you need? There is no additional contrastive learning or hard mining. In the results/discussion, the authors comment on the label distribution; this would be a large limitation of the method, as it is not possible to obtain balanced distributions from real-world images of multiple objects.
Evaluation is the weakest part. Given the relatively simple adaptation, different backbones could easily have been evaluated (R-CNN ... with their varying backbones). Given the authors are trying to motivate a new problem, only showing their own results is not convincing.
Also, a factor for the evaluation here would be the computation time, given the standard speed-vs-accuracy trade-off in object detection methods.
In the related work, the claim "The varieties of objects in one category can increase the generality of the trained algorithm" is made; is there evidence behind this (needs a citation)? In general, this seems counterintuitive: the more general the class, the harder it is to classify, as the key pixels for decision-making are harder to select in a broad setting.
Typos / Clarifications: Abstract: "objects in a greater field of view rather than being limited to a small space where the object could appear": the use of "field of view" is unclear and does not seem to make sense in this context. Related work: "many manufacturing planets have replaced": planets = plants. Related work: "They are existing works in": They = There |
1. What is the focus and contribution of the paper on zero-shot object detection?
2. What are the strengths of the proposed approach, particularly in terms of its organization and problem definition?
3. What are the weaknesses of the paper, especially regarding its reliance on existing methods and lack of novelty?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns regarding the effectiveness of the proposed method in detecting unseen objects, and how could they be addressed through further research or experimentation? | Summary Of The Paper
Review | Summary Of The Paper
This paper aims to solve object detection under an object manipulation scenario. More specifically, it attempts to design a zero-shot object detection algorithm that can detect unseen objects with the knowledge learnt from seen objects. The authors conduct experiments on the YCB Video dataset to validate the effectiveness of the proposed method.
Review
- Strengths: The paper is well-organized and the problem is well-defined.
- Weaknesses:
The proposed approach is mainly based on YOLOv5, and the novelty is really limited. No new or novel zero-shot learning algorithm is proposed to solve this specific problem.
There is no explanation of why the proposed model can detect unseen objects.
No ablation studies are given to validate the effectiveness of each proposed component. There is only one experiment, on the YCB Video dataset, and a comparison with other state-of-the-art zero-shot learning methods is missing. |
ICLR | Title
Zero-shot detection of daily objects in YCB video dataset
Abstract
For robots to manipulate objects, they have to sense the objects' locations. With the development of visual data collection and processing technology, robots are gradually evolving to localize objects in a greater field of view rather than being limited to a small space where the object could appear. To train such a robot vision system, pictures of all the objects need to be taken under various orientations and illuminations. In the traditional manufacturing environment, this is applicable since the objects involved in the production process do not change frequently. However, in the vision of smart manufacturing and high-mix-low-volume production, the parts and products for robots to handle may change frequently. Thus, it is unrealistic to re-train the vision system for every new product and task. Under this situation, we discovered the necessity of introducing zero-shot object detection. Zero-shot object detection is a subset of unsupervised learning, and it aims to detect novel objects in an image with knowledge learned from, and only from, seen objects. With a zero-shot object detection algorithm, a great deal of time can be saved on collecting training data and training the vision system. Previous works focus on detecting objects in outdoor scenes, such as bikes, cars, people, and dogs. The detection of daily objects is actually more challenging, since the knowledge that can be learned from each object is very limited. In this work, we explore the zero-shot detection of daily objects in indoor scenes, since the objects' sizes and environment are closely related to the manufacturing setup. The YCB Video Dataset is used in this work, which contains 21 objects in various categories. To the best of our knowledge, no previous work has explored zero-shot detection at this object size level or on this dataset.
1 INTRODUCTION
Industrial robots have received more and more attention in the manufacturing industry due to the rising cost of human labour and the decreasing cost of industrial robots (Carlisle, 2017). Since robots can handle heavy and repetitive jobs better than humans, many manufacturing plants have replaced human labour on the production line with robots (Robla-Gómez et al., 2017). In today's manufacturing pattern of mass production, an industrial robot is only in charge of a certain processing step with dedicated parts. This manufacturing scenario does not require the robot to change the target objects it works with frequently. However, with the recent development of control and communication technologies, the manufacturing industry is gradually evolving towards high-mix-low-volume production that provides personalized products for customers (Lu et al., 2020). This is also known as smart manufacturing; in this scenario, the manufacturing system becomes more flexible. Instead of being tied to a specific task, robots will be allocated to various tasks depending on demand, which requires the robots to be able to recognize a wide range of objects that could be involved during production.
With the development over the years, today's object detection and recognition algorithms such as Faster RCNN (Ren et al., 2015), SSD (Liu et al., 2016), YOLO (Redmon et al., 2016) and EfficientDet (Tan et al., 2020) have reached high performance. With enough collected training data, those off-the-shelf algorithms can easily be applied to a robot's vision system to recognize and localize the objects involved in the production process. However, data collection, labelling and the training of a neural network form a time-consuming process that requires expertise in the machine vision field. Even though there are data generation methods that can generate synthetic images and labels from CAD models (Wohlhart & Lepetit, 2015), frequently training a new neural network for additional new parts is still not realistic in personalized production.
Zero-shot learning (ZSL) is a learning paradigm that learns knowledge from seen categories and applies that knowledge to new categories in order to recognize objects that have never been seen before. In the work carried out by Zhang & Saligrama (2015) and Zhang & Saligrama (2016), zero-shot learning algorithms already achieved reasonable accuracy in classifying unseen objects. While zero-shot learning only aims to recognize unseen categories, a more realistic problem called generalized zero-shot learning (gZSL) was proposed by Xian et al. (2017), which aims to recognize both seen and unseen categories. However, gZSL still has flaws and cannot be directly applied to solve the previously mentioned issues. Both ZSL and gZSL focus only on recognizing the object in an image. Thus, a big assumption is made before applying those algorithms: only one object appears in the image, and it is always located in the middle of the image. In this setting, ZSL and gZSL algorithms only need to analyze the categorical information in the image but not the location information. In other words, they take the whole image as one object proposal. In real life, cameras attached to robots move with the robot and cannot guarantee that the target object is always located in the middle of the camera's view. Thus, generalized zero-shot detection (gZSD) and gZSL have to be combined to achieve a vision system that can detect and recognize unseen objects, where gZSD is responsible for localizing seen and unseen objects in the field of view and gZSL is responsible for recognizing and categorizing them.
There are existing works in this field that try to solve gZSD and gZSL at the same time to complete the whole pipeline. For example, Rahman et al. (2018) combined Faster RCNN with a gZSL module to generate object proposals and class predictions. Zhu et al. (2019) built on YOLOv2, using a single-stage detector to generate object proposals. Previous works have all used outdoor datasets such as MS COCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2010), where different objects are grouped into categories. For example, men, women and children are very different in terms of visual appearance, but they are all allocated to the 'people' category. The variety of objects within one category can increase the generality of the trained algorithm. For detecting daily objects, datasets such as YCB Video (Xiang et al., 2017) are chosen instead. The 3D objects in this dataset are similar in size and characteristics to the objects that could appear in a production pipeline. However, each object in such 3D object datasets is unique and cannot be grouped into categories, which brings the challenge of harder generalization to unseen objects.
Regarding the problems and challenges we found in the future manufacturing environment and in gZSD, we propose to modify the base version of YOLOv5 (Jocher et al., 2021) to perform gZSD on the YCB Video dataset. Compared to two-stage detectors such as Faster RCNN, one-stage detectors such as YOLO and SSD are much faster. YOLOv5, as the latest version of the YOLO series of detectors, has been shown to be faster and better than previous versions. Compared to the work done by Zhu et al. (2019), which used a modified YOLOv2 (Redmon & Farhadi, 2017) to perform gZSD, YOLOv5 outputs object proposals at three different levels and thus has better coverage of object sizes. For training and testing our algorithm, four of the 21 objects in the YCB Video dataset are picked as unseen objects. Any image that contains these four unseen objects never appears during training but is used during testing. For every object, the class label is translated into an attribute vector that contains the colour and shape information of the object. Thus, we transform the classical single-label problem into a multi-label problem to let the neural network learn attribute labels and apply them to unseen objects. It should be noted that, in this work, we only address the gZSD problem, not gZSL; that is, we only aim to localize seen and unseen objects in images with bounding boxes, not to assign class labels to the objects.
Our contributions in this paper are three-fold:
1. A novel neural network structure that is based on YOLOv5 and able to perform generalized zero-shot detection. The output bounding boxes can be further combined with other gZSL algorithms to achieve full zero-shot object detection and recognition.
2. A novel splitting method for the YCB Video dataset that splits the dataset by seen and unseen objects. This split can be used for both gZSD and gZSL research related to daily objects.
3. A novel attribute labelling method for objects in the YCB Video dataset, converting the class labels into 16 attributes that represent the colour and shape information of an object for the neural network to learn.
2 RELATED WORK
2.1 OBJECT DETECTION
Research on object detection and recognition has been developing rapidly in the past decade. The earliest deep image classification algorithm in this line of work can be traced to Krizhevsky et al. (2012). Since then, the speed and accuracy of image classification algorithms have improved continuously. Detection algorithms can be divided into two categories: two-stage detectors and one-stage detectors. Two-stage detectors such as Faster RCNN (Ren et al., 2015) and R-FCN (Dai et al., 2016) generate object proposals with a Region Proposal Network (RPN) and then perform object classification based on these proposals. One-stage detectors such as SSD (Liu et al., 2016), YOLO (Redmon et al., 2016) and EfficientDet (Tan et al., 2020) generate object proposals and classify objects at the same time by dividing the image into grids. Thus, one-stage detectors are faster than two-stage detectors and have gained more attention recently.
YOLOv5 (Jocher et al., 2021), used in this paper, is the fifth version of the classical YOLO detector. However, there is still an argument in this field about whether this algorithm qualifies as the fifth version, and the maintainers of YOLOv5 have not published a paper to justify the algorithm's abilities. Nevertheless, on the MS COCO dataset (Lin et al., 2014), YOLOv5's speed and accuracy have both been shown to outperform the previous state of the art, Google's EfficientDet. YOLOv5 is composed of three parts: backbone, neck and head. When an image is passed into the network, it is first processed by the DarkNet (Bochkovskiy et al., 2020) backbone, then passed into the PANet (Wang et al., 2019) neck, which processes and splits the feature map into three different feature levels to better cover objects of different sizes. Finally, the YOLO detector head outputs predictions at three levels based on the feature maps. In this work, we use YOLOv5 as the base algorithm and modify the detectors in the head part of the neural network, enabling it to detect both seen and unseen objects.
2.2 ZERO-SHOT LEARNING
Given images with class labels, zero-shot learning (ZSL) aims to classify unseen classes based on knowledge learned from seen classes (Fu et al., 2018). ZSL works by learning semantic information from seen classes and reassembling the semantic attributes to predict unseen classes (Zhang & Saligrama, 2016; Rahman et al., 2018). In other words, the algorithm learns the mapping from the visual domain to the semantic domain during training and makes predictions by mapping from the semantic domain back to the visual domain. However, ZSL algorithms are designed for recognizing unseen classes, and their performance degrades when both seen and unseen classes need to be recognized. Xian et al. (2017) proposed generalized ZSL (gZSL), which relaxes the constraint on recognition targets to include seen classes as well. However, ZSL and gZSL both assume that only one target exists in an image and that it is located right in the middle of the image. They still lack the ability to isolate an object from the background or from occlusion by other objects.
2.3 ZERO-SHOT DETECTION AND RECOGNITION
Several methods have recently been proposed to solve the zero-shot detection and recognition problem. These algorithms can not only detect the locations of seen and unseen objects in an image but also classify them. Most of them take a two-stage approach. In the research carried out by Bansal et al. (2018), Edge-Box was used to generate region proposals, and a Region Proposal Network (RPN) was used in the work done by Rahman et al. (2018). Using a two-stage detector is an easier way to achieve generalized zero-shot detection (gZSD), since these proposal generators do not need to be trained to work with unseen objects: they generate proposals regardless of the content inside the bounding box, and the confidence and class predictions are handled by the following gZSL network. However, these methods are inevitably slow due to the workload of class evaluation that comes with a large number of region proposals. Zhu et al. (2019) proposed to use YOLOv2 as the backbone to detect and classify unseen objects. The use of a one-stage detector improved the detection speed compared to two-stage detectors.
The methods mentioned above have all focused on outdoor scenes, and their commonly used datasets are MS COCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2010). The objects inside these two datasets have high variability within each class. For a robot that works in an indoor environment, recognizing objects at the class level is not good enough, since objects belonging to the same category may have very different usages. For example, a housekeeping robot should be able to differentiate the blue cup and the pink cup and hand them to a boy and a girl, respectively, rather than recognizing them both as cups. Abdalwhab & Liu (2019) tried to use the SUN RGB-D dataset (Song et al., 2015) to perform gZSD in an indoor environment. However, objects in the SUN RGB-D dataset are labelled by class and are furniture-sized. In this work, we use the YCB Video dataset (Xiang et al., 2017), which includes 21 distinctive objects of desktop size. The object sizes and environment in the YCB Video dataset are more closely related to the setup we may encounter in a manufacturing environment.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
Assume we have n labelled objects {g_i}_{i=1}^{n} with g_i = {b_i, a_i}, where the 4-dimensional vector b_i = {x_i, y_i, w_i, h_i} denotes the ground-truth bounding box's center-point location (x_i, y_i), width w_i and height h_i, and a_i is a 16-dimensional vector describing the feature attributes of object g_i. In the training dataset, all objects are seen objects, represented by g_i ∈ O_seen. The validation dataset is also composed of seen objects only. In the test dataset, both seen objects O_seen and unseen objects O_unseen are present, with O_seen ∩ O_unseen = ∅. The goal of this work is to predict {b_pred, c_pred, a_pred} for g_pred ∈ O_seen ∪ O_unseen. The extra term c_pred represents the confidence level of the existence of an object within the bounding box b_pred.
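For illustration, this label structure can be written down directly; the class and field names below are illustrative assumptions rather than part of the dataset format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ObjectLabel:              # one labelled object g_i = {b_i, a_i}
    b: List[float]              # bounding box [x, y, w, h]: center point, width, height
    a: List[int]                # 16-dimensional binary attribute vector

g = ObjectLabel(b=[320.0, 240.0, 80.0, 120.0],
                a=[0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0])
```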
3.2 NETWORK ARCHITECTURE
The simplified architecture of our YOLOv5-ZS network is shown in Figure 1. The network takes an RGB image as input and outputs predictions at three feature levels. The input image size is 640*640*3 in our network; since all images in the YCB Video dataset are 640*480 pixels, they are padded with grey pixels on the top and bottom to become square. The image is first processed by the backbone (DarkNet) and neck (PANet). The sizes of DarkNet and PANet are changeable in YOLOv5: by changing the number of convolutional layers and the feature map depth, four versions of YOLOv5 can be created, namely YOLOv5s (small), YOLOv5m (medium), YOLOv5l (large) and YOLOv5x (extra-large). As the network becomes bigger and deeper, its detection accuracy increases, but its detection speed decreases. In our work, we chose YOLOv5s. Tensor Tf represents the block of extracted features at three levels; these are passed to the following blocks: bounding box and attribute prediction (Tb), feature concatenation (Tc), objectness prediction (Tp) and output concatenation (To). They are explained in detail in the following sections.
3.2.1 FEATURE EXTRACTION
The Tf block is composed of three tensors, and its structure is inherited from PANet. The Path Aggregation Network (PANet) proposed by Wang et al. (2019) is a network that has both top-down and bottom-up dataflows. The bi-directional data flow ensures that image features are better preserved in the later convolutional layers. By using PANet, we obtain image features at three levels. As Figure 1 shows, the feature tensors are of sizes 80*80*128, 40*40*256 and 20*20*512: as the network goes deeper, the feature map gets smaller and deeper. From this block on, each data block consists of three tensors; however, for easier representation and a clearer graph, we represent each later block as a single tensor T.
3.2.2 OBJECT LOCALIZATION
The locations of the bounding boxes are predicted in block Tb. Each tensor in this block has the same depth of 60. In YOLO-series detectors, the width and height of the detection layer represent the number of grid cells on the image. For example, 80*80 means the image is evenly divided into 6400 grid cells, and each cell is responsible for predicting the bounding boxes whose center points fall in that cell. As the number of cells becomes smaller, the size of each cell becomes bigger and hence focuses better on bigger objects. The three different grid sizes thus allow the network to detect objects of various sizes. Each tensor's depth in this block is 60 because the network makes three predictions per cell, using anchors with different width/height ratios. For each prediction, bp has 4 values and apred has 16 values, so the tensor has 3*(4+16) = 60 channels. Note that the bounding box prediction bp = {tx, ty, tw, th} is relative to the location of the grid cell and the size of the anchor; the actual location and size of the bounding box bpred are calculated with the following equations.
x = 2σ(t_x) − 0.5 + c_x
y = 2σ(t_y) − 0.5 + c_y
w = p_w · (2σ(t_w))²
h = p_h · (2σ(t_h))²

where c_x, c_y indicate the location of the top-left corner of a grid cell and p_w, p_h indicate the width and height of the anchor.
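A minimal sketch of this decoding step, written directly from the equations above (the function signature is illustrative):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Map raw predictions t to an absolute box, given the grid cell
    corner (cx, cy) and the anchor size (pw, ph)."""
    x = 2 * sigmoid(tx) - 0.5 + cx
    y = 2 * sigmoid(ty) - 0.5 + cy
    w = pw * (2 * sigmoid(tw)) ** 2
    h = ph * (2 * sigmoid(th)) ** 2
    return x, y, w, h
```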
3.2.3 ATTRIBUTE PREDICTION
As mentioned in the previous section, block Tb also predicts the 16-dimensional attribute vector for each object. Unlike other works that describe objects with semantic vectors learned by Word2Vec or FastText, our attribute vector is constructed by human visual evaluation. The elements of the attribute vector are common colors and shapes that appear across all objects. There are two reasons we took a different approach: (1) class names such as "people" in previous works can easily be translated into semantic vectors using existing algorithms, while object names in the YCB Video dataset are instance-specific, such as "master chef can", and cannot be directly translated; (2) there is no visual variation within each object, so we can determine the attributes an object contains by human evaluation. The 16-dimensional attribute vector contains: white, blue, red, yellow, silver, black, brown, bottle, cup, can, clamp, slim, circle, cylinder, box, rectangular. Each object gi is described by several attributes in the form of a one-hot embedding. For example, the object 'red cup', which has the attributes red, cup, circle and cylinder, has the attribute vector [0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0]: the positions corresponding to red, cup, circle and cylinder are labelled '1' and the rest are labelled '0'. The predicted output apred is a 16-dimensional vector of floating-point numbers, each between 0 and 1, indicating the confidence level of an attribute.
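A minimal sketch of this attribute encoding, reproducing the 'red cup' example from the text (the helper name is an illustrative assumption):

```python
ATTRIBUTES = ["white", "blue", "red", "yellow", "silver", "black", "brown",
              "bottle", "cup", "can", "clamp", "slim",
              "circle", "cylinder", "box", "rectangular"]

def encode_attributes(present):
    """Turn a set of attribute names into the 16-dimensional binary vector."""
    return [1 if name in present else 0 for name in ATTRIBUTES]

# the 'red cup' example from the text
assert encode_attributes({"red", "cup", "circle", "cylinder"}) == \
    [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
```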
3.2.4 CONFIDENCE PREDICTION
After the bounding box and attribute prediction, block Tf and block Tb are concatenated to form block Tc, and our objectness prediction layer is learned from this concatenated block. In the original YOLOv5 detection layer, the objectness confidence is learned in the same block as Tb. However, learning the objectness confidence only from the feature layer causes the network to recognize only seen objects and to treat all unseen objects as background. Thus, we concatenate the Tf block and the Tb block to let the network also learn from the bounding box and attribute predictions. In this way, the network is able to recognize unseen objects by the attributes they have. Detectors in Tp have only three channels; each channel of a grid cell is the confidence score of the corresponding bounding box. The network makes (80*80 + 40*40 + 20*20) * 3 = 25,200 predictions in total. In the end, the output block To is the concatenation of the bounding box, attribute and objectness predictions.
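A minimal PyTorch sketch of the modified head at a single feature level is given below; the layer shapes follow the text, but this is an illustrative reconstruction rather than our exact implementation.

```python
import torch
import torch.nn as nn

class ZSHead(nn.Module):
    def __init__(self, in_ch=128, n_anchors=3, n_attr=16):
        super().__init__()
        box_attr_ch = n_anchors * (4 + n_attr)             # 3 * 20 = 60 channels
        self.box_attr = nn.Conv2d(in_ch, box_attr_ch, 1)   # T_b: boxes + attributes
        # T_p: objectness learned from the features concatenated with T_b
        self.obj = nn.Conv2d(in_ch + box_attr_ch, n_anchors, 1)

    def forward(self, feat):                   # feat: T_f, e.g. (B, 128, 80, 80)
        tb = self.box_attr(feat)               # (B, 60, 80, 80)
        tc = torch.cat([feat, tb], dim=1)      # T_c: feature concatenation
        tp = self.obj(tc)                      # (B, 3, 80, 80) objectness
        return torch.cat([tb, tp], dim=1)      # T_o: output concatenation
```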
3.3 LOSS FUNCTION DESIGN
The total loss in our algorithm is composed of three parts: localization loss, attribute loss and objectness loss. In the following sections, we will show how the loss functions are designed and implemented.
3.3.1 LOCALIZATION LOSS
In YOLOv5, the localization loss is calculated with the cIoU loss proposed by Zheng et al. (2021). Compared to the original IoU loss, the cIoU loss is more precise and converges much faster. To calculate the cIoU loss, we first need the following quantities:
IoU = (Area_pred ∩ Area_gt) / (Area_pred ∪ Area_gt)

α = υ / ((1 − IoU) + υ)

υ = (4/π²) · (arctan(w_gt/h_gt) − arctan(w_pred/h_pred))²
When the loss is calculated, not all bounding box predictions are used. A predicted bounding box is used only when its center point falls into the same grid cell as the ground-truth bounding box's center point and it has the highest IoU among the three predictions in that grid cell. We denote the selection of bounding box predictions by λi, which is set to 1 when the prediction is selected and 0 otherwise. Since our output has three feature levels, we define n as the number of levels. The total number of predictions m equals 80*80*3 = 19,200, 40*40*3 = 4,800 and 20*20*3 = 1,200 for levels 1, 2 and 3 respectively. The final localization loss is defined as the sum over layers of the mean cIoU loss, shown in the following function, where d is the distance between the two boxes' centers and c is the diagonal length of the minimum enclosing box of the two boxes.
L_loc = ∑_{j=1}^{n} (1/m) ∑_{i=1}^{m} λ_i · (1 − IoU_i + d_i²/c_i² + α_i·υ_i)
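A minimal sketch of the cIoU term for a single pair of (x, y, w, h) center-format boxes, written from the equations above rather than from our training code:

```python
import math

def ciou_loss(pred, gt, eps=1e-9):
    px, py, pw, ph = pred
    gx, gy, gw, gh = gt
    # intersection / union of the two axis-aligned boxes
    ix = max(0.0, min(px + pw/2, gx + gw/2) - max(px - pw/2, gx - gw/2))
    iy = max(0.0, min(py + ph/2, gy + gh/2) - max(py - ph/2, gy - gh/2))
    inter = ix * iy
    iou = inter / (pw*ph + gw*gh - inter + eps)
    # squared center distance d^2 over squared enclosing-box diagonal c^2
    cw = max(px + pw/2, gx + gw/2) - min(px - pw/2, gx - gw/2)
    ch = max(py + ph/2, gy + gh/2) - min(py - ph/2, gy - gh/2)
    d2, c2 = (px - gx)**2 + (py - gy)**2, cw**2 + ch**2 + eps
    # aspect-ratio consistency term and its weight
    v = 4 / math.pi**2 * (math.atan(gw/gh) - math.atan(pw/ph))**2
    alpha = v / ((1 - iou) + v + eps)
    return 1 - iou + d2 / c2 + alpha * v
```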
3.3.2 ATTRIBUTE LOSS
To calculate the attribute loss, we use the binary cross-entropy (BCE) loss with a sigmoid function (σ). Since the ground-truth value of an attribute e_gt is either 0 or 1, the predicted attribute value e_pred is first passed through a sigmoid function to map it to a value between 0 and 1. A new term z is introduced in this function; it represents the total number of attributes in the vector, which is 16. A bounding box's attribute loss is the sum of the BCE losses over all attribute terms. All other symbols keep the same meaning as in Section 3.3.1.
L_att = ∑_{j=1}^{n} (1/m) ∑_{i=1}^{m} ∑_{k=1}^{z} λ_i · [ e_{i,k}^{gt} · (−log σ(e_{i,k}^{pred})) + (1 − e_{i,k}^{gt}) · (−log(1 − σ(e_{i,k}^{pred}))) ]
3.3.3 OBJECTNESS LOSS
Unlike the localization loss and the attribute loss, the objectness loss is calculated from all predictions rather than from positive predictions only; thus, the term λi is dropped in the objectness calculation. BCE loss with a sigmoid function is also used here. p_gt is the ground-truth probability of the presence of an object in the bounding box, which equals 1 when an object is present and 0 otherwise. p_pred is the predicted confidence score, mapped to between 0 and 1 with the sigmoid function.
L_obj = ∑_{j=1}^{n} (1/m) ∑_{i=1}^{m} [ p_i^{gt} · (−log σ(p_i^{pred})) + (1 − p_i^{gt}) · (−log(1 − σ(p_i^{pred}))) ]
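A minimal sketch of the two BCE-with-sigmoid terms above; PyTorch's binary_cross_entropy_with_logits fuses the sigmoid with the binary cross-entropy exactly as in these equations, and the tensor layout (positives pre-selected for the attribute term) is an illustrative assumption.

```python
import torch.nn.functional as F

def attribute_and_objectness_loss(attr_pred, attr_gt, obj_pred, obj_gt):
    # attr_pred, attr_gt: (num_positive, 16) raw logits and binary targets,
    # restricted to the lambda-selected positive predictions; sum over the
    # 16 attribute slots, then mean over the selected predictions
    l_att = F.binary_cross_entropy_with_logits(
        attr_pred, attr_gt.float(), reduction="sum") / attr_pred.shape[0]
    # obj_pred, obj_gt: raw logits and binary targets for ALL predictions
    l_obj = F.binary_cross_entropy_with_logits(obj_pred, obj_gt.float())
    return l_att, l_obj
```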
4 EXPERIMENTS
4.1 DATASET SETTING
In Table 1 we show how the original YCB Video dataset is split into our train, validation and test datasets. The YCB Video dataset consists of 92 videos, each with thousands of frames. 21 daily objects are included in the dataset, and a subset of them is placed in the scene of each video. Since the setup of the objects does not change while a video is being recorded, the objects in a video remain constant. Thus, once the unseen objects are picked from all objects, all images that contain any of these four objects must be allocated to the test dataset. We picked four objects as unseen objects in our split: gelatin box, mustard bottle, pitcher base and power drill, labelled in bold in Table 1. In terms of detection difficulty, the gelatin box and mustard bottle are easy, the pitcher base is harder, and the power drill is the hardest. This conclusion is drawn based on the attributes they share with the seen objects, and it is also confirmed later by the detection scores.
After the four unseen objects are picked, the 31 videos that contain only seen objects are selected for the train and validation datasets. The remaining 61 videos, each containing at least one unseen object, are allocated to the test dataset. The 31 seen-only videos have 45,272 frames in total; we randomly picked 20% of them (9,040 images) for the validation dataset and placed the remaining 80% (36,232 images) in the train dataset. For the test dataset, all frames in the 61 videos (88,664 images) are used. In Table 1, we also show the number of labels for each object in each dataset.
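A minimal sketch of this two-stage split (video level, then frame level); the dataset access pattern is an illustrative assumption.

```python
import random

UNSEEN = {"gelatin_box", "mustard_bottle", "pitcher_base", "power_drill"}

def split_videos(videos):
    """videos: dict mapping video id -> set of object names in that video."""
    seen_only = [v for v, objs in videos.items() if not objs & UNSEEN]
    test = [v for v, objs in videos.items() if objs & UNSEEN]
    return seen_only, test

def split_frames(frames, val_ratio=0.2, seed=0):
    """Randomly assign 20% of the seen-only frames to validation."""
    frames = list(frames)
    random.Random(seed).shuffle(frames)
    n_val = int(len(frames) * val_ratio)
    return frames[n_val:], frames[:n_val]   # train, val
```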
4.2 TESTING RESULT
During testing, all images in the test dataset were used. Since this work focuses on detection only, we evaluate only the recall rate of the algorithm. If the IoU between an object's ground-truth bounding box (GT) and a predicted bounding box is greater than 0.5, the object is considered successfully detected and counted as a true positive (TP). The recall rate is defined as:
Recall = TP / GT
In the following table, the recall for each object is shown.
Table 2: Recall for each object
Number   0     1     2     3     4     5     6     7     8     9     10
Recall   0.88  0.72  0.83  0.85  0.73  0.75  0.61  0.46  0.86  0.73  0.05

Number   11    12    13    14    15    16    17    18    19    20
Recall   0.51  0.63  0.90  0.07  0.21  0.44  0.26  0.42  0.21  0.72
4.3 DISCUSSION
Based on the testing results, we can say that the algorithm works well when similar seen objects exist, but not for objects that are very different from the seen objects. From Table 2, the first thing we notice is that the recall rates for different objects vary widely. Some seen objects even have a lower recall rate than unseen objects. The main cause is the unbalanced number of labels between the train dataset and the test dataset. For example, the number of train labels for the sugar box is 0.75 times the number of its test labels, and its recall reaches 0.83. For the wood block, the number of train labels is only half the number of test labels, and its recall rate is only 0.21. Another factor that affects the recall rate is the variation in illumination. Since the YCB Video dataset consists of videos, images in the train dataset can only cover a very limited range of illumination conditions. Thus, the algorithm performs worse on test images with illumination conditions that have never been encountered before.
For the four unseen objects, the recall rates for objects number 4 and 7 are higher than for objects number 10 and 14, which matches our expectation. In particular, the recall rate of object number 4 is even higher than that of many seen objects. Object number 4 has the highest recall rate among all unseen objects because its color and shape commonly appear among the seen objects. In contrast, the attributes of objects number 10 and 14 are rarely seen in the train dataset. Thus, we can conclude that seen objects with a color or shape similar to an unseen object increase the detection rate of that unseen object.
5 CONCLUSION
In this paper, we proposed to use a modified YOLOv5 neural network to perform generalized zero-shot detection on seen and unseen objects. We also proposed a novel splitting method for the YCB Video dataset to train and test gZSD algorithms. By changing the final detection layers of YOLOv5, we have significantly improved its gZSD performance on the YCB Video dataset under our proposed split. For industrial robots that work in a flexible and dynamic manufacturing environment, our gZSD algorithm for detecting daily objects is a more feasible solution than a traditional vision algorithm that requires training for every object. In our experiments, we found that our algorithm is more sensitive to color than to shape. Thus, in the future, we can experiment on RGB-D images rather than RGB images to evaluate the improvement brought by the extra depth channel. | 1. What is the main contribution of the paper on zero-shot object instance detection?
2. What are the strengths and weaknesses of the proposed method, particularly in its design and modification of the YOLOv5 network?
3. How does the reviewer assess the paper's comparison with related work and ablation studies?
4. What are the minor issues in the paper regarding grammar and typos?
5. How does the reviewer evaluate the paper's results and their significance in the context of the problem tackled? | Summary Of The Paper
Review | Summary Of The Paper
This paper tries to tackle the zero-shot object instance detection problem, but without handling object classification it is essentially a method for objectness detection. The paper uses a modified YOLOv5 network with an object attribute output branch. The object attributes are 16 binary attributes pre-defined and fixed by human evaluation. The method is evaluated on the YCB-Video dataset, where 4 out of 21 objects are held out as unseen objects for zero-shot evaluation. The paper reports unconvincing results and provides no comparison with related work or ablation studies.
Review
Strengths: 1. The network is designed to output the object attribute vectors, which could be useful when the object set is fixed or strictly limited to these attributes.
Weaknesses: 1. The proposed method is a simple modification of the YOLOv5 network. The proposed object attribute prediction branch is questionable as the attribute slots are predefined, fixed and do not generalize to new attributes. 2. The proposed method does not give object ID but only bounding boxes. Thus the method is reduced to an objectness detector. There are many possible baselines (e.g. pretrained YOLOv5 as objectness detector) but the paper compares to none. 3. The paper does not mention or compare to relevant previous works like [1] and [2] 4. The paper does not conduct any ablation study, with the proposed method empirically unverified. 5. The paper only evaluates one dataset and uses the recall as the sole evaluation metrics, ignoring the harm of false positives. 6. The results are not good, only on average around 0.3 in recall on unseen objects.
Minor issues (grammar, typos, etc): 1. “since the knowledge can be learned from each object is very limited” -> “since the knowledge that can be learned from each object is very limited” 2. “ZSL and gZSL algorithms only need to analysis the categorical information” -> “ZSL and gZSL algorithms only need to analyze the categorical information” 3. “Covert the class labelling to 16 attributes that represents” -> “Convert the class labelling to 16 attributes that represents” 4. There are still many grammar/typo issues in the paper. I suggest the authors thoroughly check them, although this is not considered in the evaluation.
[1] P. Ammirato, C.-Y. Fu, M. Shvets, J. Kosecka, and A. C. Berg, “Target Driven Instance Detection,” arXiv:1803.04610 [cs], Oct. 2019, Accessed: May 19, 2020. [Online]. Available: http://arxiv.org/abs/1803.04610 [2] J.-P. Mercier, M. Garon, P. Giguère, and J.-F. Lalonde, “Deep Template-based Object Instance Detection,” arXiv:1911.11822 [cs], Nov. 2020, Accessed: Nov. 17, 2020. [Online]. Available: http://arxiv.org/abs/1911.11822 |
ICLR | Title
Zero-shot detection of daily objects in YCB video dataset
Abstract
For robots to manipulate objects, they have to sense the objects' locations. With the development of visual data collection and processing technology, robots are gradually evolving to localize objects in a greater field of view rather than being limited to a small space where the object could appear. To train such a robot vision system, pictures of all the objects need to be taken under various orientations and illuminations. In the traditional manufacturing environment, this is applicable since the objects involved in the production process do not change frequently. However, in the vision of smart manufacturing and high-mix-low-volume production, the parts and products for robots to handle may change frequently. Thus, it is unrealistic to re-train the vision system for every new product and task. Under this situation, we discovered the necessity of introducing zero-shot object detection. Zero-shot object detection is a subset of unsupervised learning, and it aims to detect novel objects in an image with knowledge learned from, and only from, seen objects. With a zero-shot object detection algorithm, a great deal of time can be saved on collecting training data and training the vision system. Previous works focus on detecting objects in outdoor scenes, such as bikes, cars, people, and dogs. The detection of daily objects is actually more challenging, since the knowledge that can be learned from each object is very limited. In this work, we explore the zero-shot detection of daily objects in indoor scenes, since the objects' sizes and environment are closely related to the manufacturing setup. The YCB Video Dataset is used in this work, which contains 21 objects in various categories. To the best of our knowledge, no previous work has explored zero-shot detection at this object size level or on this dataset.
1 INTRODUCTION
Industrial robots have received more and more attention in the manufacturing industry due to the rising cost of human labour and the decreasing cost of industrial robots (Carlisle, 2017). Since robots can handle heavy and repetitive jobs better than humans, many manufacturing plants have replaced human labour on the production line with robots (Robla-Gómez et al., 2017). In today's manufacturing pattern of mass production, an industrial robot is only in charge of a certain processing step with dedicated parts. This manufacturing scenario does not require the robot to change the target objects it works with frequently. However, with the recent development of control and communication technologies, the manufacturing industry is gradually evolving towards high-mix-low-volume production that provides personalized products for customers (Lu et al., 2020). This is also known as smart manufacturing; in this scenario, the manufacturing system becomes more flexible. Instead of being tied to a specific task, robots will be allocated to various tasks depending on demand, which requires the robots to be able to recognize a wide range of objects that could be involved during production.
With the development over the years, today's object detection and recognition algorithms such as Faster RCNN (Ren et al., 2015), SSD (Liu et al., 2016), YOLO (Redmon et al., 2016) and EfficientDet (Tan et al., 2020) have reached high performance. With enough collected training data, those off-the-shelf algorithms can easily be applied to a robot's vision system to recognize and localize the objects involved in the production process. However, data collection, labelling and the training of a neural network form a time-consuming process that requires expertise in the machine vision field. Even though there are data generation methods that can generate synthetic images and labels from CAD models (Wohlhart & Lepetit, 2015), frequently training a new neural network for additional new parts is still not realistic in personalized production.
Zero-shot learning (ZSL) is a learning paradigm that learns knowledge from seen categories and applies that knowledge to new categories in order to recognize objects that have never been seen before. In the work carried out by Zhang & Saligrama (2015) and Zhang & Saligrama (2016), zero-shot learning algorithms already achieved reasonable accuracy in classifying unseen objects. While zero-shot learning only aims to recognize unseen categories, a more realistic problem called generalized zero-shot learning (gZSL) was proposed by Xian et al. (2017), which aims to recognize both seen and unseen categories. However, gZSL still has flaws and cannot be directly applied to solve the previously mentioned issues. Both ZSL and gZSL focus only on recognizing the object in an image. Thus, a big assumption is made before applying those algorithms: only one object appears in the image, and it is always located in the middle of the image. In this setting, ZSL and gZSL algorithms only need to analyze the categorical information in the image but not the location information. In other words, they take the whole image as one object proposal. In real life, cameras attached to robots move with the robot and cannot guarantee that the target object is always located in the middle of the camera's view. Thus, generalized zero-shot detection (gZSD) and gZSL have to be combined to achieve a vision system that can detect and recognize unseen objects, where gZSD is responsible for localizing seen and unseen objects in the field of view and gZSL is responsible for recognizing and categorizing them.
There are existing works in this field that try to solve gZSD and gZSL at the same time to complete the whole pipeline. For example, Rahman et al. (2018) combined Faster RCNN with a gZSL module to generate object proposals and class predictions. Zhu et al. (2019) built on YOLOv2, using a single-stage detector to generate object proposals. Previous works have all used outdoor datasets such as MS COCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2010), where different objects are grouped into categories. For example, men, women and children are very different in terms of visual appearance, but they are all allocated to the 'people' category. The variety of objects within one category can increase the generality of the trained algorithm. For detecting daily objects, datasets such as YCB Video (Xiang et al., 2017) are chosen instead. The 3D objects in this dataset are similar in size and characteristics to the objects that could appear in a production pipeline. However, each object in such 3D object datasets is unique and cannot be grouped into categories, which brings the challenge of harder generalization to unseen objects.
Regarding the problems and challenges we found in the future manufacturing environment and in gZSD, we propose to modify the base version of YOLOv5 (Jocher et al., 2021) to perform gZSD on the YCB Video dataset. Compared to two-stage detectors such as Faster RCNN, one-stage detectors such as YOLO and SSD are much faster. YOLOv5, as the latest version of the YOLO series of detectors, has been shown to be faster and better than previous versions. Compared to the work done by Zhu et al. (2019), which used a modified YOLOv2 (Redmon & Farhadi, 2017) to perform gZSD, YOLOv5 outputs object proposals at three different levels and thus has better coverage of object sizes. For training and testing our algorithm, four of the 21 objects in the YCB Video dataset are picked as unseen objects. Any image that contains these four unseen objects never appears during training but is used during testing. For every object, the class label is translated into an attribute vector that contains the colour and shape information of the object. Thus, we transform the classical single-label problem into a multi-label problem to let the neural network learn attribute labels and apply them to unseen objects. It should be noted that, in this work, we only address the gZSD problem, not gZSL; that is, we only aim to localize seen and unseen objects in images with bounding boxes, not to assign class labels to the objects.
Our contributions in this paper are three-fold:
1. A novel neural network structure that is based on YOLOv5 and able to perform generalized zero-shot detection. The output bounding boxes can be further combined with other gZSL algorithms to achieve full zero-shot object detection and recognition.
2. A novel splitting method for the YCB Video dataset that splits the dataset by seen and unseen objects. This split can be used for both gZSD and gZSL research related to daily objects.
3. A novel attribute labelling method for objects in the YCB Video dataset, converting the class labels into 16 attributes that represent the colour and shape information of an object for the neural network to learn.
2 RELATED WORK
2.1 OBJECT DETECTION
Research on object detection and recognition has been developing rapidly in the past decade. The earliest deep image classification algorithm in this line of work can be traced to Krizhevsky et al. (2012). Since then, the speed and accuracy of image classification algorithms have improved continuously. Detection algorithms can be divided into two categories: two-stage detectors and one-stage detectors. Two-stage detectors such as Faster RCNN (Ren et al., 2015) and R-FCN (Dai et al., 2016) generate object proposals with a Region Proposal Network (RPN) and then perform object classification based on these proposals. One-stage detectors such as SSD (Liu et al., 2016), YOLO (Redmon et al., 2016) and EfficientDet (Tan et al., 2020) generate object proposals and classify objects at the same time by dividing the image into grids. Thus, one-stage detectors are faster than two-stage detectors and have gained more attention recently.
YOLOv5 (Jocher et al., 2021), used in this paper, is the fifth version of the classical YOLO detector. However, there is still an argument in this field about whether this algorithm qualifies as the fifth version, and the maintainers of YOLOv5 have not published a paper to justify the algorithm's abilities. Nevertheless, on the MS COCO dataset (Lin et al., 2014), YOLOv5's speed and accuracy have both been shown to outperform the previous state of the art, Google's EfficientDet. YOLOv5 is composed of three parts: backbone, neck and head. When an image is passed into the network, it is first processed by the DarkNet (Bochkovskiy et al., 2020) backbone, then passed into the PANet (Wang et al., 2019) neck, which processes and splits the feature map into three different feature levels to better cover objects of different sizes. Finally, the YOLO detector head outputs predictions at three levels based on the feature maps. In this work, we use YOLOv5 as the base algorithm and modify the detectors in the head part of the neural network, enabling it to detect both seen and unseen objects.
2.2 ZERO-SHOT LEARNING
Given images with class labels, zero-shot learning (ZSL) aims to classify unseen classes based on knowledge learned from seen classes (Fu et al., 2018). ZSL works by learning semantic information from seen classes and reassembling the semantic attributes to predict unseen classes (Zhang & Saligrama, 2016; Rahman et al., 2018). In other words, the algorithm learns the mapping from the visual domain to the semantic domain during training and makes predictions by mapping from the semantic domain back to the visual domain. However, ZSL algorithms are designed for recognizing unseen classes, and their performance degrades when both seen and unseen classes need to be recognized. Xian et al. (2017) proposed generalized ZSL (gZSL), which relaxes the constraint on recognition targets to include seen classes as well. However, ZSL and gZSL both assume that only one target exists in an image and that it is located right in the middle of the image. They still lack the ability to isolate an object from the background or from occlusion by other objects.
2.3 ZERO-SHOT DETECTION AND RECOGNITION
Several methods have recently been proposed to solve the zero-shot detection and recognition problem. These algorithms can not only detect the locations of seen and unseen objects in an image but also classify them. Most of them take a two-stage approach. In the research carried out by Bansal et al. (2018), Edge-Box was used to generate region proposals, and a Region Proposal Network (RPN) was used in the work done by Rahman et al. (2018). Using a two-stage detector is an easier way to achieve generalized zero-shot detection (gZSD), since these proposal generators do not need to be trained to work with unseen objects: they generate proposals regardless of the content inside the bounding box, and the confidence and class predictions are handled by the following gZSL network. However, these methods are inevitably slow due to the workload of class evaluation that comes with a large number of region proposals. Zhu et al. (2019) proposed to use YOLOv2 as the backbone to detect and classify unseen objects. The use of a one-stage detector improved the detection speed compared to two-stage detectors.
The methods mentioned above have all focused on outdoor scenes, and their commonly used datasets are MS COCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2010). The objects inside these two datasets have high variability within each class. For a robot that works in an indoor environment, recognizing objects at the class level is not good enough, since objects belonging to the same category may have very different usages. For example, a housekeeping robot should be able to differentiate the blue cup and the pink cup and hand them to a boy and a girl, respectively, rather than recognizing them both as cups. Abdalwhab & Liu (2019) tried to use the SUN RGB-D dataset (Song et al., 2015) to perform gZSD in an indoor environment. However, objects in the SUN RGB-D dataset are labelled by class and are furniture-sized. In this work, we use the YCB Video dataset (Xiang et al., 2017), which includes 21 distinctive objects of desktop size. The object sizes and environment in the YCB Video dataset are more closely related to the setup we may encounter in a manufacturing environment.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
Assume we have n labelled objects {g_i}_{i=1}^{n} with g_i = {b_i, a_i}, where the 4-dimensional vector b_i = {x_i, y_i, w_i, h_i} denotes the ground-truth bounding box's center-point location (x_i, y_i), width w_i and height h_i, and a_i is a 16-dimensional vector describing the feature attributes of object g_i. In the training dataset, all objects are seen objects, represented by g_i ∈ O_seen. The validation dataset is also composed of seen objects only. In the test dataset, both seen objects O_seen and unseen objects O_unseen are present, with O_seen ∩ O_unseen = ∅. The goal of this work is to predict {b_pred, c_pred, a_pred} for g_pred ∈ O_seen ∪ O_unseen. The extra term c_pred represents the confidence level of the existence of an object within the bounding box b_pred.
3.2 NETWORK ARCHITECTURE
The simplified architecture of our YOLOv5-ZS network is shown in Figure 1. The network takes an RGB image as input and outputs predictions at three feature levels. The input image size is 640*640*3 in our network; since all images in the YCB Video dataset are 640*480 pixels, they are padded with grey pixels on the top and bottom to become square. The image is first processed by the backbone (DarkNet) and neck (PANet). The sizes of DarkNet and PANet are changeable in YOLOv5: by changing the number of convolutional layers and the feature map depth, four versions of YOLOv5 can be created, namely YOLOv5s (small), YOLOv5m (medium), YOLOv5l (large) and YOLOv5x (extra-large). As the network becomes bigger and deeper, its detection accuracy increases, but its detection speed decreases. In our work, we chose YOLOv5s. Tensor Tf represents the block of extracted features at three levels; these are passed to the following blocks: bounding box and attribute prediction (Tb), feature concatenation (Tc), objectness prediction (Tp) and output concatenation (To). They are explained in detail in the following sections.
3.2.1 FEATURE EXTRACTION
The Tf block is composed of three tensors, and its structure is inherited from PANet. The Path Aggregation Network (PANet) proposed by Wang et al. (2019) is a network that has both top-down and bottom-up dataflows. The bi-directional data flow ensures that image features are better preserved in the later convolutional layers. By using PANet, we obtain image features at three levels. As Figure 1 shows, the feature tensors are of sizes 80*80*128, 40*40*256 and 20*20*512: as the network goes deeper, the feature map gets smaller and deeper. From this block on, each data block consists of three tensors; however, for easier representation and a clearer graph, we represent each later block as a single tensor T.
3.2.2 OBJECT LOCALIZATION
The locations of the bounding boxes are predicted in block Tb. Each tensor in this block has the same depth of 60. In YOLO-series detectors, the width and height of the detection layer represent the number of grid cells on the image. For example, 80*80 means the image is evenly divided into 6400 grid cells, and each cell is responsible for predicting the bounding boxes whose center points fall in that cell. As the number of cells becomes smaller, the size of each cell becomes bigger and hence focuses better on bigger objects. The three different grid sizes thus allow the network to detect objects of various sizes. Each tensor's depth in this block is 60 because the network makes three predictions per cell, using anchors with different width/height ratios. For each prediction, bp has 4 values and apred has 16 values, so the tensor has 3*(4+16) = 60 channels. Note that the bounding box prediction bp = {tx, ty, tw, th} is relative to the location of the grid cell and the size of the anchor; the actual location and size of the bounding box bpred are calculated with the following equations.
x = 2σ(t_x) − 0.5 + c_x
y = 2σ(t_y) − 0.5 + c_y
w = p_w · (2σ(t_w))²
h = p_h · (2σ(t_h))²

where c_x, c_y indicate the location of the top-left corner of a grid cell and p_w, p_h indicate the width and height of the anchor.
3.2.3 ATTRIBUTE PREDICTION
As mentioned in the previous section, block Tb also predicts the 16-dimensional attribute vector for each object. Unlike other works that describe objects with semantic vectors learned by Word2Vec or FastText, our attribute vector is constructed by human visual evaluation. The elements of the attribute vector are common colors and shapes that appear across all objects. There are two reasons we took a different approach: (1) class names such as "people" in previous works can easily be translated into semantic vectors using existing algorithms, while object names in the YCB Video dataset are instance-specific, such as "master chef can", and cannot be directly translated; (2) there is no visual variation within each object, so we can determine the attributes an object contains by human evaluation. The 16-dimensional attribute vector contains: white, blue, red, yellow, silver, black, brown, bottle, cup, can, clamp, slim, circle, cylinder, box, rectangular. Each object gi is described by several attributes in the form of a one-hot embedding. For example, the object 'red cup', which has the attributes red, cup, circle and cylinder, has the attribute vector [0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0]: the positions corresponding to red, cup, circle and cylinder are labelled '1' and the rest are labelled '0'. The predicted output apred is a 16-dimensional vector of floating-point numbers, each between 0 and 1, indicating the confidence level of an attribute.
3.2.4 CONFIDENCE PREDICTION
After the bounding box and attribute prediction, block Tf and block Tb are concatenated to form block Tc, and our objectness prediction layer is learned from this concatenated block. In the original YOLOv5 detection layer, the objectness confidence is learned in the same block as Tb. However, learning the objectness confidence only from the feature layer causes the network to recognize only seen objects and to treat all unseen objects as background. Thus, we concatenate the Tf block and the Tb block to let the network also learn from the bounding box and attribute predictions. In this way, the network is able to recognize unseen objects by the attributes they have. Detectors in Tp have only three channels; each channel of a grid cell is the confidence score of the corresponding bounding box. The network makes (80*80 + 40*40 + 20*20) * 3 = 25,200 predictions in total. In the end, the output block To is the concatenation of the bounding box, attribute and objectness predictions.
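Inference-time post-processing is not detailed above; with 25,200 raw predictions per image, a standard approach would be to filter by objectness score and then apply non-maximum suppression. A minimal sketch under that assumption (the threshold value is also an assumption):

```python
def filter_predictions(boxes, scores, conf_thresh=0.25):
    """Keep predictions whose sigmoid objectness exceeds the threshold;
    NMS over the survivors would normally follow to drop duplicate boxes."""
    return [(b, s) for b, s in zip(boxes, scores) if s > conf_thresh]
```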
3.3 LOSS FUNCTION DESIGN
The total loss in our algorithm is composed of three parts: localization loss, attribute loss and objectness loss. In the following sections, we will show how the loss functions are designed and implemented.
3.3.1 LOCALIZATION LOSS
In YOLOv5, the localization loss is calculated with the cIoU loss proposed by Zheng et al. (2021). Compared to the original IoU loss, the cIoU loss is more precise and converges much faster. To calculate the cIoU loss, we first need the following quantities:
IoU = (Area_pred ∩ Area_gt) / (Area_pred ∪ Area_gt)

α = υ / ((1 − IoU) + υ)

υ = (4/π²) · (arctan(w_gt/h_gt) − arctan(w_pred/h_pred))²
When the loss is calculated, not all bounding box predictions are used. A predicted bounding box is used only when its center point falls into the same grid cell as the ground-truth bounding box's center point and it has the highest IoU among the three predictions in that grid cell. We denote the selection of bounding box predictions by λi, which is set to 1 when the prediction is selected and 0 otherwise. Since our output has three feature levels, we define n as the number of levels. The total number of predictions m equals 80*80*3 = 19,200, 40*40*3 = 4,800 and 20*20*3 = 1,200 for levels 1, 2 and 3 respectively. The final localization loss is defined as the sum over layers of the mean cIoU loss, shown in the following function, where d is the distance between the two boxes' centers and c is the diagonal length of the minimum enclosing box of the two boxes.
L_loc = ∑_{j=1}^{n} (1/m) ∑_{i=1}^{m} λ_i · (1 − IoU_i + d_i²/c_i² + α_i·υ_i)
3.3.2 ATTRIBUTE LOSS
To calculate the attribute loss, we use the binary cross-entropy (BCE) loss with a sigmoid function (σ). Since the ground-truth value of an attribute e_gt is either 0 or 1, the predicted attribute value e_pred is first passed through a sigmoid function to map it to a value between 0 and 1. A new term z is introduced in this function; it represents the total number of attributes in the vector, which is 16. A bounding box's attribute loss is the sum of the BCE losses over all attribute terms. All other symbols keep the same meaning as in Section 3.3.1.
L_att = ∑_{j=1}^{n} (1/m) ∑_{i=1}^{m} ∑_{k=1}^{z} λ_i · [ e_{i,k}^{gt} · (−log σ(e_{i,k}^{pred})) + (1 − e_{i,k}^{gt}) · (−log(1 − σ(e_{i,k}^{pred}))) ]
3.3.3 OBJECTNESS LOSS
Unlike the localization loss and the attribute loss, the objectness loss is calculated from all predictions rather than from positive predictions only; thus, the term λi is dropped in the objectness calculation. BCE loss with a sigmoid function is also used here. p_gt is the ground-truth probability of the presence of an object in the bounding box, which equals 1 when an object is present and 0 otherwise. p_pred is the predicted confidence score, mapped to between 0 and 1 with the sigmoid function.
L_obj = ∑_{j=1}^{n} (1/m) ∑_{i=1}^{m} [ p_i^{gt} · (−log σ(p_i^{pred})) + (1 − p_i^{gt}) · (−log(1 − σ(p_i^{pred}))) ]
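The three terms above make up the total loss; no weighting coefficients are specified above, so the weights in this minimal sketch are illustrative assumptions.

```python
def total_loss(l_loc, l_att, l_obj, w_loc=1.0, w_att=1.0, w_obj=1.0):
    """Weighted sum of the localization, attribute and objectness losses;
    the default weights are placeholders, not tuned values."""
    return w_loc * l_loc + w_att * l_att + w_obj * l_obj
```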
4 EXPERIMENTS
4.1 DATASET SETTING
In Table 1 below we show how the original YCB Video dataset is split into our train, validation and test datasets. The YCB Video dataset consists of 92 videos, each with thousands of frames. 21 daily objects are included in the dataset, and a subset of them is placed in the scene of each video. Since the setup of objects does not change while a video is being recorded, the objects in a video remain constant. Thus, once the unseen objects are picked from all objects, every image that contains any of these four objects needs to be allocated to the test dataset. We picked four objects as unseen objects in our split: gelatin box, mustard bottle, pitcher base and power drill, labelled in bold text in Table 1. In terms of detection difficulty, the gelatin box and mustard bottle are easy, the pitcher base is harder, and the power drill is hardest. This ranking is based on the attributes they share with the seen objects, and it is confirmed later by the detection scores.
After the four unseen objects are picked, the 31 videos that contain only seen objects are used for the train and validation datasets. The remaining 61 videos, which contain at least one unseen object, are allocated to the test dataset. The 31 videos containing only seen objects have 45272 frames in total. We randomly picked 20% of them (9040 images) for the validation dataset; the remaining 80% (36232 images) form the train dataset. For the test dataset, all frames of the 61 videos (88664 images) are used. In Table 1, we also show the number of labels for each object in each dataset.
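A sketch of this split procedure; the object names, the `frames_of` helper and the dictionary layout of `videos` are hypothetical conveniences for illustration:

```python
import random

UNSEEN = {"gelatin_box", "mustard_bottle", "pitcher_base", "power_drill"}

def split_ycb(videos, frames_of, seed=0):
    """`videos` maps a video id to the set of object names it contains;
    `frames_of` returns that video's frame list. Videos with any unseen
    object go to test; the rest is split 80/20 into train/validation."""
    seen_only = [v for v, objs in videos.items() if not objs & UNSEEN]
    test_videos = [v for v in videos if v not in set(seen_only)]
    frames = [f for v in seen_only for f in frames_of(v)]
    random.Random(seed).shuffle(frames)
    n_val = int(0.2 * len(frames))        # 20% validation, 80% train
    return frames[n_val:], frames[:n_val], test_videos
```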
4.2 TESTING RESULT
During testing, all images in the test dataset were used. Since this work focuses on detection only, we evaluate only the recall rate of the algorithm. We define that if the IoU between an object's ground-truth bounding box (GT) and the predicted bounding box is bigger than 0.5, the object is successfully detected, noted as a True Positive (TP). The recall rate is defined as:

Recall = TP / GT
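A minimal sketch of this evaluation, assuming a standard box-IoU helper `iou` is supplied by the caller:

```python
def recall_for_object(gt_boxes, pred_boxes, iou, thresh=0.5):
    """A ground-truth box counts as a True Positive when some predicted
    box overlaps it with IoU > 0.5; recall = TP / GT."""
    tp = sum(1 for g in gt_boxes
             if any(iou(g, p) > thresh for p in pred_boxes))
    return tp / max(len(gt_boxes), 1)
```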
In the following table, the recall for each object is shown.
Table 2: Recall for each object
Number  0     1     2     3     4     5     6     7     8     9     10    11    12    13    14    15    16    17    18    19    20
Recall  0.88  0.72  0.83  0.85  0.73  0.75  0.61  0.46  0.86  0.73  0.05  0.51  0.63  0.90  0.07  0.21  0.44  0.26  0.42  0.21  0.72
4.3 DISCUSSION
Based on the testing results, we can say that the algorithm works well when there are similar seen objects, but not for objects that are very different from the seen objects. From Table 2, the first thing we notice is that the recall rates for different objects vary widely. Some seen objects even have a lower recall rate than unseen objects. The main cause is the unbalanced number of labels between the train dataset and the test dataset. For example, the number of train labels for the sugar box is 0.75 times the number of test labels, and its recall reaches 0.83. For the wood block, the number of train labels is only half the number of test labels, and its recall rate is only 0.21. Another factor that affects the recall rate is the variation of illumination. Since the YCB Video dataset consists of videos, images in the train dataset can only cover a very limited range of illumination conditions. Thus, the algorithm performs worse on test images with illumination conditions that have never been met before.
For the four unseen objects, the recall rates for objects number 4 and 7 are higher than for objects number 10 and 14, which matches our expectation. In particular, object number 4 has a recall rate that is even higher than many seen objects. Object number 4 has the highest recall rate among all unseen objects because its color and shape commonly appear on seen objects. However, the attributes of objects number 10 and 14 are rarely seen in the train dataset. Thus, we can conclude that seen objects with a similar color or shape to unseen objects can increase the detection rate of unseen objects.
5 CONCLUSION
In this paper, we proposed to use a modified YOLOv5 neural network to perform generalized zero-shot detection on seen and unseen objects. We also proposed a novel splitting method for the YCB Video dataset to train and test gZSD algorithms. By changing the final detection layers of YOLOv5, we have significantly improved its gZSD performance on the YCB Video dataset split with our proposal. For industrial robots that work in a flexible and dynamic manufacturing environment, our gZSD algorithm for detecting daily objects is a more feasible solution than a traditional vision algorithm that requires training for every object. In our experiments, we found that our algorithm is more sensitive to color than to shape. Thus, in the future, we can experiment with RGB-D images rather than RGB images to evaluate the improvement brought by the extra depth channel. | 1. What is the focus of the paper, and what problem does it aim to solve?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its technical novelty and effectiveness?
3. How does the reviewer assess the significance of the second contribution claimed by the authors?
4. What are some concerns or suggestions regarding the language and grammar used in the manuscript?
5. Are there any questions or issues that the reviewer wants to bring up but haven't been mentioned yet? | Summary Of The Paper
Review | Summary Of The Paper
This paper tackles a generalized zero-shot localization task. The YOLOv5 network is slightly modified to detect pre-defined attributes. The proposed method is evaluated on the YCB Video dataset.
Review
Strength
The paper tackles a practically very important problem. The problem is well-motivated from the viewpoint of a practical application.
Weakness
The technical novelty of the proposed method is very limited. Basically, the difference is only in the attribute prediction part, but the modification is very straightforward even in that part.
The experiments are not sufficient to support the effectiveness of the proposed method. It is not compared with any existing method. There is little analysis.
The second contribution claimed by the authors is weak.
2.A novel splitting method for YCB Video dataset that splits the dataset by seen and unseen objects.
Actually, they just provide one of the arbitrary splits, and this cannot be regarded as contribution.
The paper claims that it tackles generalized zero-shot detection, but I believe “localization” is a more appropriate word than “detection” since the proposed method does not recognize object classes.
The manuscript needs careful proof-reading for English style and grammar issues. For example, but not limited to
robots will be allocated to various tasks depend on demand. Which requires the robots to be able to…
For every object, their class labeling is translated into attribute vectors that’s contains the colour and shape information of each object.
Tensor Tf represents the block of are extracted features in three levels, they are passed to the following blocks
It needs to be notice that…
our attribute vector is carried out by human eye evaluation
For example, object ‘red cup’ has the attributes of red, cup, circle, cylinder will have the attribute vector [0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0].
The 31 videos only contain seen objects have 45272 frames in total
YCB Video dataset is consists of … |
ICLR | Title
Understanding Trainable Sparse Coding with Matrix Factorization
Abstract
Sparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, such as the Iterative Soft Thresholding Algorithm and its accelerated version (ISTA, FISTA). These methods are optimal in the class of first-order methods for non-smooth, convex functions. However, they do not exploit the particular structure of the problem at hand nor the input data distribution. An acceleration using neural networks, coined LISTA, was proposed in Gregor & Le Cun (2010), which showed empirically that one could achieve high quality estimates with few iterations by modifying the parameters of the proximal splitting appropriately. In this paper we study the reasons for such acceleration. Our mathematical analysis reveals that it is related to a specific matrix factorization of the Gram kernel of the dictionary, which attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the ℓ1 ball. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys an improved convergence bound with respect to the non-adaptive version. Moreover, our analysis also shows that conditions for acceleration occur mostly at the beginning of the iterative process, consistent with numerical experiments. We further validate our analysis by showing that on dictionaries where this factorization does not exist, adaptive acceleration fails.
1 Introduction
Feature selection is a crucial point in high dimensional data analysis. Different techniques have been developed to tackle this problem efficiently, and amongst them sparsity has emerged as a leading paradigm. In statistics, the LASSO estimator (Tibshirani, 1996) provides a reliable way to select features and has been extensively studied in the last two decades (Hastie et al. (2015) and references therein). In machine learning and signal processing, sparse coding has made its way into several modern architectures, including large scale computer vision (Coates & Ng, 2011) and biologically inspired models (Cadieu & Olshausen, 2012). Also, dictionary learning is a generic unsupervised learning method to perform nonlinear dimensionality reduction with efficient computational complexity (Mairal et al., 2009). All these techniques heavily rely on the resolution of ℓ1-regularized least squares.
The ℓ1-sparse coding problem is defined as solving, for a given input x ∈ R^n and dictionary D ∈ R^{n×m}, the following problem:

z∗(x) = arg min_z F_x(z) := (1/2)‖x − Dz‖² + λ‖z‖_1 . (1)
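As a concrete reference point, here is a minimal NumPy sketch of plain ISTA for problem (1); it anticipates the proximal splitting scheme recalled below, and the step size 1/L with L = ‖DᵀD‖_2 is the usual choice. Problem sizes are illustrative.

```python
import numpy as np

def soft_threshold(u, theta):
    """Point-wise soft-thresholding: sign(u) * (|u| - theta)_+."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def ista(D, x, lam, n_iter=100):
    """Plain ISTA baseline for (1); not the trained LISTA variant."""
    B = D.T @ D
    L = np.linalg.norm(B, 2)               # largest eigenvalue of B
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = B @ z - D.T @ x             # gradient of the smooth part
        z = soft_threshold(z - grad / L, lam / L)
    return z
```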
This problem is convex and can therefore be solved using convex optimization machinery. Proximal splitting methods (Beck & Teboulle, 2009) alternate between the minimization of the smooth and differentiable part using the gradient information and the minimization of the non-differentiable part using a proximal operator (Combettes & Bauschke, 2011). These methods can also be accelerated by considering a momentum term, as it is done in FISTA
(Beck & Teboulle, 2009; Nesterov, 2005). Coordinate descent (Friedman et al., 2007; Osher & Li, 2009) leverages the closed formula that can be derived for optimizing the problem (1) for one coordinate z_i given that all the others are fixed. At each step of the algorithm, one coordinate is updated to its optimal value, which yields an inexpensive scheme to perform each step. The choice of the coordinate to update at each step is critical for the performance of the optimization procedure. Least Angle Regression (LARS) (Hesterberg et al., 2008) is another method that computes the whole LASSO regularization path. These algorithms all provide an optimization procedure that leverages the local properties of the cost function iteratively. They can be shown to be optimal among the class of first-order methods for generic convex, non-smooth functions (Bubeck, 2014).
But all these results are given in the worst case and do not use the distribution of the considered problem. One can thus wonder whether a more efficient algorithm to solve (1) exists for a fixed dictionary D and generic input x drawn from a certain input data distribution. In Gregor & Le Cun (2010), the authors introduced LISTA, a trained version of ISTA that adapts the parameters of the proximal splitting algorithm to approximate the solution of the LASSO using a finite number of steps. This method exploits the common structure of the problem to learn a better transform than the generic ISTA step. As ISTA is composed of a succession of linear operations and piecewise non linearities, the authors use the neural network framework and the backpropagation to derive an efficient procedure solving the LASSO problem. In Sprechmann et al. (2012), the authors extended LISTA to more generic sparse coding scenarios and showed that adaptive acceleration is possible under general input distributions and sparsity conditions.
In this paper, we are interested in the following question: Given a finite computational budget, what is the optimum estimator of the sparse coding? This question belongs to the general topic of computational tradeoffs in statistical inference. Randomized sketches (Alaoui & Mahoney, 2015; Yang et al., 2015) reduce the size of convex problems by projecting expensive kernel operators into random subspaces, and reveal a tradeoff between computational efficiency and statistical accuracy. Agarwal (2012) provides several theoretical results on perfoming inference under various computational constraints, and Chandrasekaran & Jordan (2013) considers a hierarchy of convex relaxations that provide practical tradeoffs between accuracy and computational cost. More recently, Oymak et al. (2015) provides sharp time-data tradeoffs in the context of linear inverse problems, showing the existence of a phase transition between the number of measurements and the convergence rate of the resulting recovery optimization algorithm. Giryes et al. (2016) builds on this result to produce an analysis of LISTA that describes acceleration in conditions where the iterative procedure has linear convergence rate. Finally, Xin et al. (2016) also studies the capabilities of Deep Neural networks at approximating sparse inference. The authors show that unrolled iterations lead to better approximation if one allows the weights to vary at each layer, contrary to standard splitting algorithms. Whereas their focus is on relaxing the convergence hypothesis of iterative thresholding algorithms, we study a complementary question, namely when is speedup possible, without assuming strongly convex optimization. Their results are consistent with ours, since our analysis also shows that learning shared layer weights is less effective.
Inspired by the LISTA architecture, our mathematical analysis reveals that adaptive acceleration is related to a specific matrix factorization of the Gram matrix of the dictionary B = DᵀD as B = AᵀSA − R, where A is unitary, S is diagonal and the residual is positive semidefinite: R ⪰ 0. Our factorization balances near diagonalization, by asking that ‖R‖ is small, against small perturbation of the ℓ1 norm, i.e. that ‖Az‖_1 − ‖z‖_1 is small. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys a convergence rate with improved constants with respect to the non-adaptive version. Moreover, our analysis also shows that acceleration is mostly possible at the beginning of the iterative process, when the current estimate is far from the optimal solution, which is consistent with numerical experiments. We also show that the existence of this factorization is not only sufficient for acceleration, but also necessary. This is shown by constructing dictionaries whose Gram matrix diagonalizes in a basis that is incoherent with the canonical basis, and verifying that LISTA fails in that case to accelerate with respect to ISTA.
In our numerical experiments, we design a specialized version of LISTA called FacNet, with more constrained parameters, which is then used as a tool to show that our theoretical analysis captures the acceleration mechanism of LISTA. Our theoretical results can be applied to FacNet and as LISTA is a generalization of this model, it always performs at least as well, showing that the existence of the factorization is a sufficient certificate for acceleration by
LISTA. Reciprocally, we show that for cases where no acceleration is possible with FacNet, the LISTA model also fails to provide acceleration, linking the two speedup mechanisms. This numerical evidence suggests that the existence of our proposed factorization is sufficient and somewhat necessary for LISTA to show good results.
The rest of the paper is structured as follows. Section 2 presents our mathematical analysis and proves the convergence of the adaptive algorithm as a function of the quality of the matrix factorization. Finally, Section 3 presents the generic architectures that will enable the usage of such schemes and the numerical experiments, which validate our analysis over a range of different scenarios.
2 Accelerating Sparse Coding with Sparse Matrix Factorizations
2.1 Unitary Proximal Splitting
In this section we describe our setup for accelerating sparse coding based on the Proximal Splitting method. Let Ω ⊂ R^n be the set describing our input data, and D ∈ R^{n×m} be a dictionary, with m > n. We wish to find fast and accurate approximations of the sparse coding z∗(x) of any x ∈ Ω, defined in (1). For simplicity, we denote B = DᵀD and y = D†x to rewrite (1) as

z∗(x) = arg min_z F_x(z) = E(z) + G(z) , with E(z) := (1/2)(y − z)ᵀB(y − z) and G(z) := λ‖z‖_1 . (2)
For clarity, we will refer to F_x as F and to z∗(x) as z∗. The classic proximal splitting technique finds z∗ as the limit of the sequence (z_k)_k, obtained by successively constructing a surrogate loss F_k(z) of the form

F_k(z) = E(z_k) + (z_k − y)ᵀB(z − z_k) + L_k‖z − z_k‖²_2 + λ‖z‖_1 , (3)

satisfying F_k(z) ≥ F(z) for all z ∈ R^m. Since F_k is separable in each coordinate of z, z_{k+1} = arg min_z F_k(z) can be computed efficiently. This scheme is based on a majorization of the quadratic form (y − z)ᵀB(y − z) by an isotropic quadratic form L_k‖z_k − z‖²_2. The convergence rate of the splitting algorithm is optimized by choosing L_k as the smallest constant satisfying F_k(z) ≥ F(z), which corresponds to the largest singular value of B. The computation of z_{k+1} remains separable when the quadratic form L_k I is replaced by any diagonal form. However, the Gram matrix B = DᵀD might be poorly approximated by diagonal forms for general dictionaries. Our objective is to accelerate the convergence of this algorithm by finding appropriate factorizations of the matrix B such that

B ≈ AᵀSA , and ‖Az‖_1 ≈ ‖z‖_1 ,

where A is unitary and S is diagonal positive definite. Given a point z_k at iteration k, we can rewrite F(z) as
F(z) = E(z_k) + (z_k − y)ᵀB(z − z_k) + Q_B(z, z_k) , (4)

with Q_B(v, w) := (1/2)(v − w)ᵀB(v − w) + λ‖v‖_1 . For any diagonal positive definite matrix S and unitary matrix A, the surrogate loss F̃(z, z_k) := E(z_k) + (z_k − y)ᵀB(z − z_k) + Q_S(Az, Az_k) can be explicitly minimized, since

arg min_z F̃(z, z_k) = Aᵀ arg min_u ( (z_k − y)ᵀBAᵀ(u − Az_k) + Q_S(u, Az_k) )
                    = Aᵀ arg min_u Q_S( u, Az_k − S⁻¹AB(z_k − y) ) , (5)
where we use the variable change u = Az. As S is diagonal positive definite, (5) is separable and can be computed easily, using a linear operation followed by a point-wise non-linear soft-thresholding. Thus, any couple (A, S) yields a computationally cheap scheme. The question is then how to factorize B using S and A in an optimal manner, that is, such that the resulting proximal splitting sequence converges as fast as possible to the sparse coding solution.
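A NumPy sketch of one such rotated proximal step (5), with `S_diag` holding the diagonal of S; this is our own minimal rendering, not the paper's released code:

```python
import numpy as np

def rotated_prox_step(z, y, B, A, S_diag, lam):
    """One step of (5): form u = A z - S^{-1} A B (z - y), soft-threshold
    coordinate-wise with thresholds lam / S_ii, and map back by A^T.
    A is assumed unitary, S_diag strictly positive."""
    u = A @ z - (A @ (B @ (z - y))) / S_diag
    u = np.sign(u) * np.maximum(np.abs(u) - lam / S_diag, 0.0)
    return A.T @ u
```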
2.2 Non-asymptotic Analysis
We will now establish convergence results based on the previous factorization. These bounds will inform us on how to best choose the factors Ak and Sk in each iteration.
For that purpose, let us define

δ_A(z) = λ( ‖Az‖_1 − ‖z‖_1 ) , and R = AᵀSA − B . (6)

The quantity δ_A(z) thus measures how invariant the ℓ1 norm is to the unitary operator A, whereas R corresponds to the residual of approximating the original Gram matrix B by our factorization AᵀSA. Given a current estimate z_k, we can rewrite
F̃(z, z_k) = F(z) + (1/2)(z − z_k)ᵀR(z − z_k) + δ_A(z) . (7)
By imposing that R is a positive semidefinite residual one immediately obtains the following bound.
Proposition 2.1. Suppose that R = AᵀSA − B is positive definite, and define

z_{k+1} = arg min_z F̃(z, z_k) . (8)

Then

F(z_{k+1}) − F(z∗) ≤ (1/2)‖R‖ ‖z_k − z∗‖²_2 + δ_A(z∗) − δ_A(z_{k+1}) . (9)

Proof. By definition of z_{k+1} and using the fact that R ⪰ 0 we have

F(z_{k+1}) − F(z∗) ≤ F(z_{k+1}) − F̃(z_{k+1}, z_k) + F̃(z∗, z_k) − F(z∗)
= −(1/2)(z_{k+1} − z_k)ᵀR(z_{k+1} − z_k) − δ_A(z_{k+1}) + (1/2)(z∗ − z_k)ᵀR(z∗ − z_k) + δ_A(z∗)
≤ (1/2)(z∗ − z_k)ᵀR(z∗ − z_k) + ( δ_A(z∗) − δ_A(z_{k+1}) ) ,

where the first line results from the definition of z_{k+1} and the third line makes use of the positiveness of R.
This simple bound reveals that to obtain fast approximations to the sparse coding it is sufficient to find S and A such that ‖R‖ is small and the ℓ1 commutation term δ_A is small. These two conditions will often be in tension: one can always obtain R ≡ 0 by using the Singular Value Decomposition of B = A_0ᵀS_0A_0 and setting A = A_0 and S = S_0. However, the resulting A_0 might introduce a large commutation error δ_{A_0}. Similarly, as the absolute value is non-expansive, i.e. ||a| − |b|| ≤ |a − b|, we have that

|δ_A(z)| = λ| ‖Az‖_1 − ‖z‖_1 | ≤ λ‖(A − I)z‖_1 ≤ λ√(2 max(‖Az‖_0, ‖z‖_0)) · ‖A − I‖ · ‖z‖_2 , (10)

where we have used the Cauchy-Schwartz inequality ‖x‖_1 ≤ √(‖x‖_0) ‖x‖_2 in the last step. In particular, (10) shows that unitary matrices in the neighborhood of I with ‖A − I‖ small have a small ℓ1 commutation error δ_A but can be inappropriate for approximating a general matrix B.
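The two quantities balanced by the factorization can be probed numerically; a small NumPy sketch, where the tolerance in the PSD check is our own choice:

```python
import numpy as np

def residual(A, S_diag, B):
    """R = A^T S A - B from (6); the factorization is admissible when R
    is positive semidefinite (checked here up to a small tolerance)."""
    R = A.T @ (S_diag[:, None] * A) - B
    return R, bool(np.all(np.linalg.eigvalsh(R) >= -1e-10))

def delta_A(A, z, lam):
    """l1 commutation term delta_A(z) = lam * (||Az||_1 - ||z||_1)."""
    return lam * (np.abs(A @ z).sum() - np.abs(z).sum())
```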
The commutation error also depends upon the sparsity of z and Az . If both z and Az are sparse then the commutation error is reduced, which can be achieved if A is itself a sparse unitary matrix. Moreover, since
|δ_A(z) − δ_A(z′)| ≤ λ|‖z‖_1 − ‖z′‖_1| + λ|‖Az‖_1 − ‖Az′‖_1| and |‖z‖_1 − ‖z′‖_1| ≤ ‖z − z′‖_1 ≤ √(‖z − z′‖_0) ‖z − z′‖_2 ,

it results that δ_A is Lipschitz with respect to the Euclidean norm; let us denote by L_A(z) its local Lipschitz constant in z, which can be computed using the norm of the subgradient in z¹. A uniform upper bound for this constant is (1 + ‖A‖_1)λ√m, but it is typically much smaller when z and Az are both sparse. Equation (8) defines an iterative procedure determined by the pairs {(A_k, S_k)}_k. The following theorem uses the previous results to compute an upper bound for the resulting sparse coding estimator.
Theorem 2.2. Let A_k, S_k be the pair of unitary and diagonal matrices corresponding to iteration k, chosen such that R_k = A_kᵀS_kA_k − B ⪰ 0. It results that

F(z_k) − F(z∗) ≤ ( (z∗ − z_0)ᵀR_0(z∗ − z_0) + 2L_{A_0}(z_1)‖z∗ − z_1‖_2 ) / (2k) + (α − β) / (2k) , (11)

with

α = ∑_{i=1}^{k−1} ( 2L_{A_i}(z_{i+1})‖z∗ − z_{i+1}‖_2 + (z∗ − z_i)ᵀ(R_{i−1} − R_i)(z∗ − z_i) ) ,

β = ∑_{i=0}^{k−1} (i + 1) ( (z_{i+1} − z_i)ᵀR_i(z_{i+1} − z_i) + 2δ_{A_i}(z_{i+1}) − 2δ_{A_i}(z_i) ) ,

where L_A(z) denotes the local Lipschitz constant of δ_A at z.
Remarks: If one sets Ak = I and Sk = ‖B‖I for all k ≥ 0, (11) corresponds to the bound of the ISTA algorithm (Beck & Teboulle, 2009).
We can specialize the theorem to the case where A_0, S_0 are chosen to minimize the bound (9) and A_k = I, S_k = ‖B‖I for k ≥ 1.

Corollary 2.3. If A_k = I, S_k = ‖B‖I for k ≥ 1 then

F(z_k) − F(z∗) ≤ ( (z∗ − z_0)ᵀR_0(z∗ − z_0) + 2L_{A_0}(z_1)(‖z∗ − z_1‖ + ‖z_1 − z_0‖) + (z∗ − z_1)ᵀR_0(z∗ − z_1) ) / (2k) . (12)
This corollary shows that by simply replacing the first step of ISTA by the modified proximal step detailed in (5), one can obtain an improved bound at fixed k as soon as

2‖R_0‖ max(‖z∗ − z_0‖²_2, ‖z∗ − z_1‖²_2) + 4L_{A_0}(z_1) max(‖z∗ − z_0‖_2, ‖z∗ − z_1‖_2) ≤ ‖B‖‖z∗ − z_0‖²_2 ,

which, assuming ‖z∗ − z_0‖_2 ≥ ‖z∗ − z_1‖_2, translates into

‖R_0‖ + 2L_{A_0}(z_1)/‖z∗ − z_0‖_2 ≤ ‖B‖/2 . (13)

More generally, given a current estimate z_k, searching for a factorization (A_k, S_k) will improve the upper bound when

‖R_k‖ + 2L_{A_k}(z_{k+1})/‖z∗ − z_k‖_2 ≤ ‖B‖/2 . (14)
We emphasize that this is not a guarantee of acceleration, since it is based on improving an upper bound. However, it provides a simple picture on the mechanism that makes non-asymptotic acceleration possible.
2.3 Interpretation
In this section we analyze the consequences of Theorem 2.2 in the design of fast sparse coding approximations, and provide a possible explanation for the behavior observed numerically.
2.3.1 “Phase Transition” and Law of Diminishing Returns
(14) reveals that the optimum matrix factorization in terms of minimizing the upper bound depends upon the current scale of the problem, that is, of the distance ‖z∗ − zk‖. At the beginning of the optimization, when ‖z∗ − zk‖ is large, the bound (14) makes it easier to explore the space of factorizations (A,S) with A further away from the identity. Indeed, the bound tolerates larger increases in LA(zk+1), which is dominated by
L_A(z_{k+1}) ≤ λ( √(‖z_{k+1}‖_0) + √(‖Az_{k+1}‖_0) ) ,
¹ This quantity exists as δ_A is a difference of convex functions. See the proof of Lemma B.1 in the appendices for details.
i.e. the sparsity of both z1 and A0(z1). On the other hand, when we reach intermediate solutions zk such that ‖z∗ − zk‖ is small with respect to LA(zk+1), the upper bound is minimized by choosing factorizations where A is closer and closer to the identity, leading to the non-adaptive regime of standard ISTA (A = Id).
This is consistent with the numerical experiments, which show that the gains provided by learned sparse coding methods are mostly concentrated in the first iterations. Once the estimates reach a certain energy level, section 3 shows that LISTA enters a steady state in which the convergence rate matches that of standard ISTA.
The natural follow-up question is to determine how many layers of adaptive splitting are sufficient before entering the steady regime of convergence. A conservative estimate of this quantity would require an upper bound of ‖z∗− zk‖ from the energy bound F (zk)−F (z∗). Since in general F is convex but not strongly convex, such bound does not exist unless one can assume that F is locally strongly convex (for instance for sufficiently small values of F ).
2.3.2 Improving the factorization to particular input distributions
Given an input dataset D = (x_i, z_i^{(0)}, z_i^∗)_{i≤N}, containing examples x_i ∈ R^n, initial estimates z_i^{(0)} and sparse coding solutions z_i^∗, the factorization adapted to D is defined as

min_{A,S; AᵀA=I, AᵀSA−B ⪰ 0} (1/N) ∑_{i≤N} (1/2)(z_i^{(0)} − z_i^∗)ᵀ(AᵀSA − B)(z_i^{(0)} − z_i^∗) + δ_A(z_i^∗) − δ_A(z_{1,i}) . (15)

Therefore, adapting the factorization to a particular dataset, as opposed to enforcing it uniformly over a given ball B(z∗; R) (where the radius R ensures that the initial value z_0 ∈ B(z∗; R)), will always improve the upper bound (9). Studying the gains resulting from the adaptation to the input distribution is left for future work.
3 Numerical Experiments
This section provides numerical arguments to analyse adaptive optimization algorithms and their performances, and relates them to the theoretical properties developed in the previous section. All the experiments were run using Python and Tensorflow. For all the experiments, the training is performed using Adagrad (Duchi et al., 2011). The code to reproduce the figures is available online².
3.1 Adaptive Optimization Networks Architectures
LISTA/LFISTA In Gregor & Le Cun (2010), the authors introduced LISTA, a neural network constructed by considering ISTA as a recurrent neural net. At each step, ISTA performs the following 2-step procedure:
1. u_{k+1} = z_k − (1/L)Dᵀ(Dz_k − x) = W_g z_k + W_e x , with W_g := I − (1/L)DᵀD and W_e := (1/L)Dᵀ ,

2. z_{k+1} = h_{λ/L}(u_{k+1}) , where h_θ(u) = sign(u)(|u| − θ)_+ . (step k of ISTA) (16)

² The code can be found at https://github.com/tomMoral/AdaptiveOptim
This procedure combines a linear operation to compute u_{k+1} with an element-wise non-linearity. It can be summarized as a recurrent neural network, presented in Figure 1a, with tied weights. The authors in Gregor & Le Cun (2010) considered the architecture Φ_Θ^K with parameters Θ = (W_g^{(k)}, W_e^{(k)}, θ^{(k)})_{k=1,...,K}, obtained by unfolding the recurrent network K times, as presented in Figure 1b. The layers φ_Θ^k are defined as
z_{k+1} = φ_Θ^k(z_k) := h_θ(W_g z_k + W_e x) . (17)
If W_g^{(k)} = I − DᵀD/L, W_e^{(k)} = Dᵀ/L and θ^{(k)} = λ/L are fixed for all the K layers, the output of this neural net is exactly the vector z_K resulting from K steps of ISTA. With LISTA, the parameters Θ are learned using back-propagation to minimize the cost function:

f(Θ) = E_x[ F_x(Φ_Θ^K(x)) ] .
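A minimal NumPy sketch of the unrolled LISTA forward pass (17); initializing `params` and `thetas` as in (16) reproduces K plain ISTA steps. The list-of-pairs layout is our own convention:

```python
import numpy as np

def lista_forward(x, params, thetas):
    """Unrolled LISTA forward pass. `params` is a list of (W_g, W_e)
    pairs and `thetas` the per-layer soft-thresholding levels."""
    z = np.zeros(params[0][0].shape[0])
    for (W_g, W_e), theta in zip(params, thetas):
        u = W_g @ z + W_e @ x
        z = np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)
    return z
```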
A similar algorithm can be derived from FISTA, the accelerated version of ISTA, to obtain LFISTA (see Figure 5 in Appendix A). The architecture is very similar to LISTA, now with two memory taps:

z_{k+1} = h_θ(W_g z_k + W_m z_{k−1} + W_e x) .
Factorization network Our analysis in Section 2 suggests a refactorization of LISTA into a more structured class of parameters. Following the same basic architecture, and using (5), the network FacNet, Ψ_Θ^K, is formed using layers such that:

z_{k+1} = ψ_Θ^k(z_k) := Aᵀ h_{λS⁻¹}(Az_k − S⁻¹A(DᵀDz_k − Dᵀx)) , (18)

with S diagonal and A unitary, the parameters of the k-th layer. The parameters obtained after training such a network with back-propagation can be used with the theory developed in Section 2. Up to the last linear operation Aᵀ of the network, this network is a re-parametrization of LISTA in a more constrained parameter space. Thus, LISTA is a generalization of this proposed network and should perform at least as well as FacNet, for a fixed number of layers.
The optimization can also be performed using back-propagation. To enforce the unitary constraints on A^{(k)}, the cost function is modified with a penalty:

f(Θ) = E_x[ F_x(Ψ_Θ^K(x)) ] + (µ/K) ∑_{k=1}^{K} ‖I − (A^{(k)})ᵀA^{(k)}‖²_2 , (19)

with Θ = (A^{(k)}, S^{(k)})_{k=1...K} the parameters of the K layers and µ a scaling factor for the regularization. The resulting matrix A^{(k)} is then projected onto the Stiefel manifold using an SVD to obtain the final parameters, coherent with the network structure.
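A sketch of one FacNet layer (18) together with the penalty (19); we read the squared norm in (19) as a Frobenius norm, which is an assumption on our side:

```python
import numpy as np

def facnet_layer(z, x, D, A, S_diag, lam):
    """One FacNet layer, cf. (18); A unitary, S_diag the diagonal of S."""
    u = A @ z - (A @ (D.T @ (D @ z) - D.T @ x)) / S_diag
    u = np.sign(u) * np.maximum(np.abs(u) - lam / S_diag, 0.0)
    return A.T @ u

def unitarity_penalty(A_list, mu):
    """Soft unitarity constraint from (19), one A per layer."""
    K = len(A_list)
    return (mu / K) * sum(
        np.linalg.norm(np.eye(A.shape[1]) - A.T @ A) ** 2 for A in A_list)
```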
Linear model Finally, it is important to distinguish the performance gain resulting from choosing a suitable starting point from the acceleration due to our model. To highlight the gain obtained by changing the starting point, we considered a linear model with one layer such that z_out = A^{(0)}x. This model is learned using SGD with the convex cost function f(A^{(0)}) = ‖(I − DA^{(0)})x‖²_2 + λ‖A^{(0)}x‖_1 . It computes a tradeoff between starting from the sparsest point 0 and a point with minimal reconstruction error y. Then, we observe the performance of the classical ISTA iterations using z_out as a starting point instead of 0.
3.2 Synthetic problems with known distributions
Gaussian dictionary In order to disentangle the role of dictionary structure from the role of data distribution structure, the minimization problem is tested using a synthetic generative model with no structure in the weights distribution. First, m atoms d_i ∈ R^n are drawn iid from a multivariate Gaussian with mean 0 and covariance I_n, and the dictionary D is defined as (d_i/‖d_i‖_2)_{i=1...m}. The data points are generated from their sparse codes following a Bernoulli-Gaussian model. The coefficients z = (z_1, . . . , z_m) are constructed as z_i = b_i a_i, where b_i ∼ B(ρ) and a_i ∼ N(0, σI_m), and ρ controls the sparsity of the data. The values are set to m=100, n=64 for the dictionary dimensions, ρ = 5/m for the sparsity level and σ=10 for the activation coefficient generation parameters. The sparsity regularization is set to λ=0.01. The batches used for training are generated with the model at each step and the cost function is evaluated over a fixed test set, not used in the training.
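A NumPy sketch of this Bernoulli-Gaussian generative model with the stated values; the seed handling is our own choice:

```python
import numpy as np

def make_problem(m=100, n=64, rho=0.05, sigma=10.0, n_samples=1000, seed=0):
    """Synthetic Bernoulli-Gaussian sparse coding problem; rho = 5/m."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n, m))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms d_i
    b = rng.random((n_samples, m)) < rho      # Bernoulli supports b_i
    a = sigma * rng.standard_normal((n_samples, m))
    Z = b * a                                 # sparse codes z_i = b_i a_i
    X = Z @ D.T                               # observations x_i = D z_i
    return D, X, Z
```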
Figure 2 displays the cost performance of methods ISTA/FISTA/Linear relative to their iterations and of methods LISTA/LFISTA/FacNet relative to the number of layers used to solve our generated problem. Linear has performance comparable to the learned methods at the first iteration, but a gap appears as the number of layers increases, until a point where it achieves the same performance as the non-adaptive methods. This highlights that adaptation is possible in the subsequent layers of the networks, going further than choosing a suitable starting point for iterative methods. The first layers permit a large gain over the classical optimization strategy, by leveraging the structure of the problem. This appears even with no structure in the sparsity patterns of the input data, in accordance with the results in the previous section. We also observe diminishing returns as the number of layers increases. This results from the phase transition described in Subsubsection 2.3.1, as the last layers behave as ISTA steps and do not speed up the convergence. The 3 learned algorithms always perform at least as well as their classical counterparts, as stated in Theorem 2.2. We also explored the effect of the sparsity level on the training and learning of adaptive networks. In the denser setting, the arbitrage between the ℓ1-norm and the squared error is easier, as the solution has many non-zero coefficients. Thus in this setting, the approximate method is more precise than in the very sparse setting, where the approximation must perform a fine selection of the coefficients. But it also yields a lower gain at the beginning, as the sparser solution can move faster.
There is a small gap between LISTA and FacNet in this setup. This can be explained by the extra constraints on the weights that we impose in FacNet, which effectively reduce the parameter space by half. Also, we implement the unitary constraints on the matrix A by a soft regularization (see (19)), involving an extra hyper-parameter µ that also contributes to the small performance gap. In any case, these experiments show that our analysis accounts for most of the acceleration provided by LISTA, as the performances of both methods are similar, up to optimization errors.
Adversarial dictionary The results from Section 2 show that problems with a Gram matrix composed of large eigenvalues associated with non-sparse eigenvectors are harder to accelerate. Indeed, it is not possible in this case to find a quasi-diagonalization of the matrix B that does not distort the ℓ1 norm. It is possible to generate such a dictionary using harmonic analysis. The Discrete Fourier Transform (DFT) distorts the ℓ1 ball considerably, since a very sparse vector in the temporal space is transformed into a widely spread spectrum in the Fourier domain. We can thus design a dictionary for which LISTA and FacNet performances should be degraded: D = (d_i/‖d_i‖_2)_{i=1...m} is constructed such that d_{j,k} = e^{−2πi j ζ_k}, with (ζ_k)_{k≤n} randomly selected from {1/m, . . . , (m/2)/m} without replacement.
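A sketch of this adversarial construction; note that drawing n frequencies without replacement requires n ≤ m/2, so the default sizes here are illustrative rather than the m=100, n=64 used earlier, and the complex-valued output would need a real embedding in practice:

```python
import numpy as np

def adversarial_dictionary(m=128, n=64, seed=0):
    """Fourier-style dictionary d_{j,k} = exp(-2*pi*1j*j*zeta_k), with n
    frequencies zeta_k drawn without replacement from {1/m,...,(m/2)/m}."""
    rng = np.random.default_rng(seed)
    zeta = rng.choice(np.arange(1, m // 2 + 1) / m, size=n, replace=False)
    j = np.arange(m)
    D = np.exp(-2j * np.pi * np.outer(zeta, j))   # shape (n, m), complex
    return D / np.linalg.norm(D, axis=0)          # unit-norm atoms
```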
The resulting performances are reported in Figure 3. The first layer provides a big gain by changing the starting point of the iterative methods. It realizes an arbitrage of the tradeoff between starting from 0 and starting from y. But the next layers do not yield any extra gain compared to the original ISTA algorithm. After 4 layers, the cost performances of both adaptive methods and ISTA are equivalent. It is clear that in this case, FacNet does not efficiently accelerate the sparse coding, in accordance with our result from Section 2. LISTA also displays poor performances in this setting. This provides further evidence that FacNet and LISTA share the same acceleration mechanism, as adversarial dictionaries for FacNet are also adversarial for LISTA.
3.3 Sparse coding with overcomplete dictionary on images
Wavelet encoding for natural images A highly structured dictionary composed of translation-invariant Haar wavelets is used to encode 8x8 patches of images from the PASCAL VOC 2008 dataset. The network is used to learn an efficient sparse coder for natural images over this family. 500 images are sampled from the dataset to train the encoder. Training batches are obtained by uniformly sampling patches from the training image set to feed the stochastic optimization of the network. The encoder is then tested with 10000 patches sampled from 100 new images from the same dataset.
Learned dictionary for MNIST To evaluate the performance of LISTA for dictionary learning, LISTA was used to encode MNIST images over an unconstrained dictionary, learned a priori using classical dictionary learning techniques. The dictionary of 100 atoms was learned from 10000 MNIST images in grayscale rescaled to 17x17, using the implementation of Mairal et al. (2009) provided in scikit-learn, with λ = 0.05. Then, the networks were trained through backpropagation using all 60000 images from the training set of MNIST. Finally, the performance of these encoders was evaluated with the 10000 images of the test set of MNIST.
Figure 4 displays the cost performance of the adaptive procedures compared to the non-adaptive algorithms. In both scenarios, FacNet has performance comparable to that of LISTA, and their behavior is in accordance with the theory developed in Section 2. The gains become smaller for each added layer, and the initial gain is achieved whether the dictionary is structured or unstructured. The MNIST case presents a much larger gain compared to the experiment with natural images. This results from the difference in structure of the input distribution: the MNIST digits are much more constrained than patches from natural images, and the network is able to leverage this to find a better encoder. In the MNIST case, a network composed of 12 layers is sufficient to achieve performance comparable to ISTA with more than 1000 iterations.
4 Conclusions
In this paper we studied the problem of finite computational budget approximation of sparse coding. Inspired by the ability of neural networks to accelerate over splitting methods on the first few iterations, we have studied which properties of the dictionary matrix and the data distribution lead to such acceleration. Our analysis reveals that one can obtain acceleration by finding approximate matrix factorizations of the dictionary which nearly diagonalize its Gram matrix, but whose orthogonal transformations leave the ℓ1 ball approximately invariant. By appropriately balancing these two conditions, we show that the resulting rotated proximal splitting scheme has an upper bound which improves over the ISTA upper bound under appropriate sparsity.
In order to relate this specific factorization property to the actual LISTA algorithm, we have introduced a reparametrization of the neural network that specifically computes the factorization, and incidentally provides reduced learning complexity (fewer parameters) compared with the original LISTA. The numerical experiments of Section 3 show that such a reparametrization recovers the same gains as the original neural network, providing evidence that our theoretical analysis partially explains the behavior of the LISTA neural network. Our acceleration scheme is inherently transient, in the sense that once the iterates are sufficiently close to the optimum, the factorization is not effective anymore. This transient effect is also consistent with the performance observed numerically, although the possibility remains open to find alternative models that further exploit the particular structure of the sparse coding. Finally, we provide evidence that successful matrix factorization is not only sufficient but also necessary for acceleration, by showing that Fourier dictionaries are not accelerated.
Despite these initial results, a lot remains to be understood on the general question of optimal tradeoffs between computational budget and statistical accuracy. Our analysis so far did not take into account any probabilistic consideration (e.g. obtain approximations that hold with high probability or in expectation). Another area of further study is the extension of our analysis to the FISTA case, and more generally to other inference tasks that are currently solved via iterative procedures compatible with neural network parametrizations, such as inference in Graphical Models using Belief Propagation or other ill-posed inverse problems.
A Learned Fista
A similar algorithm can be derived from FISTA, the accelerated version of ISTA, to obtain LFISTA (see Figure 5). The architecture is very similar to LISTA, now with two memory taps. FISTA introduces a momentum term to improve the convergence rate of ISTA as follows:
1. y_k = z_k + ((t_{k−1} − 1)/t_k) (z_k − z_{k−1}) ,

2. z_{k+1} = h_{λ/L}( y_k − (1/L)∇E(y_k) ) = h_{λ/L}( (I − (1/L)B) y_k + (1/L)Dᵀx ) ,

3. t_{k+1} = (1 + √(1 + 4t_k²)) / 2 .
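A minimal NumPy sketch of these three steps (problem sizes and iteration count are illustrative):

```python
import numpy as np

def fista(D, x, lam, n_iter=100):
    """Plain FISTA following steps 1-3 above, with L = ||D^T D||_2."""
    B = D.T @ D
    L = np.linalg.norm(B, 2)
    z_prev = z = np.zeros(D.shape[1])
    t_prev = t = 1.0
    for _ in range(n_iter):
        y = z + ((t_prev - 1) / t) * (z - z_prev)   # momentum step
        u = y - (B @ y - D.T @ x) / L               # gradient step on E
        z_prev, z = z, np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
        t_prev, t = t, (1 + np.sqrt(1 + 4 * t * t)) / 2
    return z
```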
By substituting the expression for y_k into the update of z_{k+1}, we obtain a generic recurrent architecture very similar to LISTA, now with two memory taps, that we denote LFISTA:

z_{k+1} = h_θ(W_g^{(k)} z_k + W_m^{(k)} z_{k−1} + W_e^{(k)} x) .
This model is equivalent to running K steps of FISTA when its parameters are initialized with

W_g^{(k)} = (1 + (t_{k−1} − 1)/t_k) (I − (1/L)B) ,

W_m^{(k)} = ((1 − t_{k−1})/t_k) (I − (1/L)B) ,

W_e^{(k)} = (1/L) Dᵀ .
The parameters of this new architecture, presented in Figure 5 , are trained analogously as in the LISTA case.
B Proofs
Lemma B.1. Suppose that R = AᵀSA − B is positive definite, and define

z_{k+1} = arg min_z F̃(z, z_k) , and (20)

δ_A(z) = ‖Az‖_1 − ‖z‖_1. Then we have

F(z_{k+1}) − F(z∗) ≤ (1/2)( (z∗ − z_k)ᵀR(z∗ − z_k) − (z∗ − z_{k+1})ᵀR(z∗ − z_{k+1}) ) + 〈∂δ_A(z_{k+1}), z_{k+1} − z∗〉 . (21)
Proof. We define

f(t) = F( t z_{k+1} + (1 − t) z∗ ) , t ∈ [0, 1] .

Since F is convex, f is also convex on [0, 1]. Since f(0) = F(z∗) is the global minimum, it results that f′(t) is increasing on (0, 1], and hence

F(z_{k+1}) − F(z∗) = f(1) − f(0) = ∫ f′(t) dt ≤ f′(1) ,

where f′(1) is any element of ∂f(1). Since δ_A(z) is a difference of convex functions, its subgradient can be defined as a limit of infimal convolutions Hiriart-Urruty (1991). We have ∂f(1) = 〈∂F(z_{k+1}), z_{k+1} − z∗〉, and since ∂F(z) = ∂F̃(z, z_k) − R(z − z_k) − ∂δ_A(z) and 0 ∈ ∂F̃(z_{k+1}, z_k), it results that ∂F(z_{k+1}) = −R(z_{k+1} − z_k) − ∂δ_A(z_{k+1}), and thus

F(z_{k+1}) − F(z∗) ≤ (z∗ − z_{k+1})ᵀR(z_{k+1} − z_k) + 〈∂δ_A(z_{k+1}), (z∗ − z_{k+1})〉 . (22)

(21) is obtained by observing that

(z∗ − z_{k+1})ᵀR(z_{k+1} − z_k) ≤ (1/2)( (z∗ − z_k)ᵀR(z∗ − z_k) − (z∗ − z_{k+1})ᵀR(z∗ − z_{k+1}) ) , (23)

thanks to the fact that R ⪰ 0.
Theorem B.2. Let A_k, S_k be the pair of unitary and diagonal matrices corresponding to iteration k, chosen such that R_k = A_kᵀS_kA_k − B ⪰ 0. It results that

F(z_k) − F(z∗) ≤ ( (z∗ − z_0)ᵀR_0(z∗ − z_0) + 2〈∇δ_{A_0}(z_1), (z∗ − z_1)〉 ) / (2k) + (α − β) / (2k) , with (24)

α = ∑_{n=1}^{k−1} ( 2〈∇δ_{A_n}(z_{n+1}), (z∗ − z_{n+1})〉 + (z∗ − z_n)ᵀ(R_{n−1} − R_n)(z∗ − z_n) ) ,

β = ∑_{n=0}^{k−1} (n + 1) ( (z_{n+1} − z_n)ᵀR_n(z_{n+1} − z_n) + 2δ_{A_n}(z_{n+1}) − 2δ_{A_n}(z_n) ) .
Proof: The proof is adapted from (Beck & Teboulle, 2009), Theorem 3.1. From Lemma B.1, we start by using (21) to bound terms of the form F(z_n) − F(z∗):

F(z_n) − F(z∗) ≤ 〈∇δ_{A_n}(z_{n+1}), (z∗ − z_{n+1})〉 + (1/2)( (z∗ − z_n)ᵀR_n(z∗ − z_n) − (z∗ − z_{n+1})ᵀR_n(z∗ − z_{n+1}) ) .

Adding these inequalities for n = 0 . . . k − 1 we obtain

∑_{n=0}^{k−1} F(z_n) − kF(z∗) ≤ ∑_{n=0}^{k−1} 〈∇δ_{A_n}(z_{n+1}), (z∗ − z_{n+1})〉
+ (1/2)( (z∗ − z_0)ᵀR_0(z∗ − z_0) − (z∗ − z_k)ᵀR_{k−1}(z∗ − z_k) )
+ (1/2) ∑_{n=1}^{k−1} (z∗ − z_n)ᵀ(R_{n−1} − R_n)(z∗ − z_n) . (25)
On the other hand, we also have

F(z_n) − F(z_{n+1}) ≥ F(z_n) − F̃(z_n, z_n) + F̃(z_{n+1}, z_n) − F(z_{n+1})
= −δ_{A_n}(z_n) + δ_{A_n}(z_{n+1}) + (1/2)(z_{n+1} − z_n)ᵀR_n(z_{n+1} − z_n) ,

which results in

∑_{n=0}^{k−1} (n + 1)(F(z_n) − F(z_{n+1})) ≥ ∑_{n=0}^{k−1} (n + 1)( (1/2)(z_{n+1} − z_n)ᵀR_n(z_{n+1} − z_n) + δ_{A_n}(z_{n+1}) − δ_{A_n}(z_n) ) , (26)

and hence, by telescoping the left-hand side,

∑_{n=0}^{k−1} F(z_n) − kF(z_k) ≥ ∑_{n=0}^{k−1} (n + 1)( (1/2)(z_{n+1} − z_n)ᵀR_n(z_{n+1} − z_n) + δ_{A_n}(z_{n+1}) − δ_{A_n}(z_n) ) .
Combining (25) and (26) we obtain

F(z_k) − F(z∗) ≤ ( (z∗ − z_0)ᵀR_0(z∗ − z_0) + 2〈∇δ_{A_0}(z_1), (z∗ − z_1)〉 ) / (2k) + (α − β) / (2k) , (27)

with

α = ∑_{n=1}^{k−1} ( 2〈∇δ_{A_n}(z_{n+1}), (z∗ − z_{n+1})〉 + (z∗ − z_n)ᵀ(R_{n−1} − R_n)(z∗ − z_n) ) ,

β = ∑_{n=0}^{k−1} (n + 1)( (z_{n+1} − z_n)ᵀR_n(z_{n+1} − z_n) + 2δ_{A_n}(z_{n+1}) − 2δ_{A_n}(z_n) ) .
Corollary B.3. If A_k = I, S_k = ‖B‖I for k > 0 then

F(z_k) − F(z∗) ≤ ( (z∗ − z_0)ᵀR_0(z∗ − z_0) + 2L_{A_0}(z_1)(‖z∗ − z_1‖ + ‖z_1 − z_0‖) + (z∗ − z_1)ᵀR_0(z∗ − z_1) ) / (2k) . (28)

Proof: We verify that in that case, R_{n−1} − R_n ≡ 0 for n > 1 and δ_{A_n} ≡ 0 for n > 0. | 1. What is the focus of the paper regarding theoretical analysis and neural networks?
2. What are the strengths of the proposed approach in terms of acceleration and factorization?
3. Do you have any concerns or questions regarding the paper's content, such as unclear purposes or undefined variables? | Review | Review
This paper performs theoretical analysis to understand how sparse coding could be accelerated by neural networks. The neural networks are generated by unfolding the ISTA/FISTA iterations. Based on the results, the authors proposed a reparametrization approach for the neural network architecture to enforce the factorization property and recovered the original gain of LISTA, which justified the theoretical analysis. My comments are listed below.
It is not clear what the purpose of Section 2.3.2 is. Adapting the factorization to the input distribution based on (15) would be time-consuming, because the overhead of solving (15) may outweigh the time saved overall. In fact, the approach does not use (15) but back-propagation to learn the factorization parameters.
Minor comments:
- E(z_k) in (3) and (4) are not defined.
- E_x in (19) is not defined.
- Forward referencing (“Equation (20) defines…”) in the paragraph above Theorem 2.2 needs to be corrected.
ICLR | Title
Understanding Trainable Sparse Coding with Matrix Factorization
Abstract
Sparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, such as the Iterative Soft Thresholding Algorithm and its accelerated version (ISTA, FISTA). These methods are optimal in the class of first-order methods for non-smooth, convex functions. However, they do not exploit the particular structure of the problem at hand nor the input data distribution. An acceleration using neural networks, coined LISTA, was proposed in Gregor & Le Cun (2010), which showed empirically that one could achieve high quality estimates with few iterations by modifying the parameters of the proximal splitting appropriately. In this paper we study the reasons for such acceleration. Our mathematical analysis reveals that it is related to a specific matrix factorization of the Gram kernel of the dictionary, which attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the ℓ1 ball. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys an improved convergence bound with respect to the non-adaptive version. Moreover, our analysis also shows that conditions for acceleration occur mostly at the beginning of the iterative process, consistent with numerical experiments. We further validate our analysis by showing that on dictionaries where this factorization does not exist, adaptive acceleration fails.
1 Introduction
Feature selection is a crucial point in high dimensional data analysis. Different techniques have been developed to tackle this problem efficiently, and amongst them sparsity has emerged as a leading paradigm. In statistics, the LASSO estimator (Tibshirani, 1996) provides a reliable way to select features and has been extensively studied in the last two decades (Hastie et al. (2015) and references therein). In machine learning and signal processing, sparse coding has made its way into several modern architectures, including large scale computer vision (Coates & Ng, 2011) and biologically inspired models (Cadieu & Olshausen, 2012). Also, dictionary learning is a generic unsupervised learning method to perform nonlinear dimensionality reduction with efficient computational complexity (Mairal et al., 2009). All these techniques heavily rely on the resolution of ℓ1-regularized least squares.
The ℓ1-sparse coding problem is defined as solving, for a given input x ∈ R^n and dictionary D ∈ R^{n×m}, the following problem:

z∗(x) = arg min_z F_x(z) := (1/2)‖x − Dz‖² + λ‖z‖_1 . (1)
This problem is convex and can therefore be solved using convex optimization machinery. Proximal splitting methods (Beck & Teboulle, 2009) alternate between the minimization of the smooth and differentiable part using the gradient information and the minimization of the non-differentiable part using a proximal operator (Combettes & Bauschke, 2011). These methods can also be accelerated by considering a momentum term, as it is done in FISTA
(Beck & Teboulle, 2009; Nesterov, 2005). Coordinate descent (Friedman et al., 2007; Osher & Li, 2009) leverages the closed formula that can be derived for optimizing the problem (1) for one coordinate z_i given that all the others are fixed. At each step of the algorithm, one coordinate is updated to its optimal value, which yields an inexpensive scheme to perform each step. The choice of the coordinate to update at each step is critical for the performance of the optimization procedure. Least Angle Regression (LARS) (Hesterberg et al., 2008) is another method that computes the whole LASSO regularization path. These algorithms all provide an optimization procedure that leverages the local properties of the cost function iteratively. They can be shown to be optimal among the class of first-order methods for generic convex, non-smooth functions (Bubeck, 2014).
But all these results are given in the worst case and do not use the distribution of the considered problem. One can thus wonder whether a more efficient algorithm to solve (1) exists for a fixed dictionary D and generic input x drawn from a certain input data distribution. In Gregor & Le Cun (2010), the authors introduced LISTA, a trained version of ISTA that adapts the parameters of the proximal splitting algorithm to approximate the solution of the LASSO using a finite number of steps. This method exploits the common structure of the problem to learn a better transform than the generic ISTA step. As ISTA is composed of a succession of linear operations and piecewise non linearities, the authors use the neural network framework and the backpropagation to derive an efficient procedure solving the LASSO problem. In Sprechmann et al. (2012), the authors extended LISTA to more generic sparse coding scenarios and showed that adaptive acceleration is possible under general input distributions and sparsity conditions.
In this paper, we are interested in the following question: Given a finite computational budget, what is the optimum estimator of the sparse coding? This question belongs to the general topic of computational tradeoffs in statistical inference. Randomized sketches (Alaoui & Mahoney, 2015; Yang et al., 2015) reduce the size of convex problems by projecting expensive kernel operators into random subspaces, and reveal a tradeoff between computational efficiency and statistical accuracy. Agarwal (2012) provides several theoretical results on perfoming inference under various computational constraints, and Chandrasekaran & Jordan (2013) considers a hierarchy of convex relaxations that provide practical tradeoffs between accuracy and computational cost. More recently, Oymak et al. (2015) provides sharp time-data tradeoffs in the context of linear inverse problems, showing the existence of a phase transition between the number of measurements and the convergence rate of the resulting recovery optimization algorithm. Giryes et al. (2016) builds on this result to produce an analysis of LISTA that describes acceleration in conditions where the iterative procedure has linear convergence rate. Finally, Xin et al. (2016) also studies the capabilities of Deep Neural networks at approximating sparse inference. The authors show that unrolled iterations lead to better approximation if one allows the weights to vary at each layer, contrary to standard splitting algorithms. Whereas their focus is on relaxing the convergence hypothesis of iterative thresholding algorithms, we study a complementary question, namely when is speedup possible, without assuming strongly convex optimization. Their results are consistent with ours, since our analysis also shows that learning shared layer weights is less effective.
Inspired by the LISTA architecture, our mathematical analysis reveals that adaptive acceleration is related to a specific matrix factorization of the Gram matrix of the dictionary B = DᵀD as B = AᵀSA − R, where A is unitary, S is diagonal and the residual is positive semidefinite: R ⪰ 0. Our factorization balances near diagonalization, by asking that ‖R‖ is small, against small perturbation of the ℓ1 norm, i.e. that ‖Az‖_1 − ‖z‖_1 is small. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys a convergence rate with improved constants with respect to the non-adaptive version. Moreover, our analysis also shows that acceleration is mostly possible at the beginning of the iterative process, when the current estimate is far from the optimal solution, which is consistent with numerical experiments. We also show that the existence of this factorization is not only sufficient for acceleration, but also necessary. This is shown by constructing dictionaries whose Gram matrix diagonalizes in a basis that is incoherent with the canonical basis, and verifying that LISTA fails in that case to accelerate with respect to ISTA.
In our numerical experiments, we design a specialized version of LISTA called FacNet, with more constrained parameters, which is then used as a tool to show that our theoretical analysis captures the acceleration mechanism of LISTA. Our theoretical results can be applied to FacNet and as LISTA is a generalization of this model, it always performs at least as well, showing that the existence of the factorization is a sufficient certificate for acceleration by
LISTA. Reciprocally, we show that for cases where no acceleration is possible with FacNet, the LISTA model also fails to provide acceleration, linking the two speedup mechanisms. This numerical evidence suggests that the existence of our proposed factorization is sufficient and somewhat necessary for LISTA to show good results.
The rest of the paper is structured as follows. Section 2 presents our mathematical analysis and proves the convergence of the adaptive algorithm as a function of the quality of the matrix factorization. Finally, Section 3 presents the generic architectures that will enable the usage of such schemes and the numerical experiments, which validate our analysis over a range of different scenarios.
2 Accelerating Sparse Coding with Sparse Matrix Factorizations
2.1 Unitary Proximal Splitting
In this section we describe our setup for accelerating sparse coding based on the Proximal Splitting method. Let Ω ⊂ R^n be the set describing our input data, and D ∈ R^{n×m} be a dictionary, with m > n. We wish to find fast and accurate approximations of the sparse coding z∗(x) of any x ∈ Ω, defined in (1). For simplicity, we denote B = DᵀD and y = D†x to rewrite (1) as

z∗(x) = arg min_z F_x(z) = E(z) + G(z) , with E(z) := (1/2)(y − z)ᵀB(y − z) and G(z) := λ‖z‖_1 . (2)
For clarity, we will refer to F_x as F and to z∗(x) as z∗. The classic proximal splitting technique finds z∗ as the limit of the sequence (z_k)_k, obtained by successively constructing a surrogate loss F_k(z) of the form

F_k(z) = E(z_k) + (z_k − y)ᵀB(z − z_k) + L_k‖z − z_k‖²_2 + λ‖z‖_1 , (3)

satisfying F_k(z) ≥ F(z) for all z ∈ R^m. Since F_k is separable in each coordinate of z, z_{k+1} = arg min_z F_k(z) can be computed efficiently. This scheme is based on a majorization of the quadratic form (y − z)ᵀB(y − z) by an isotropic quadratic form L_k‖z_k − z‖²_2. The convergence rate of the splitting algorithm is optimized by choosing L_k as the smallest constant satisfying F_k(z) ≥ F(z), which corresponds to the largest singular value of B. The computation of z_{k+1} remains separable when the quadratic form L_k I is replaced by any diagonal form. However, the Gram matrix B = DᵀD might be poorly approximated by diagonal forms for general dictionaries. Our objective is to accelerate the convergence of this algorithm by finding appropriate factorizations of the matrix B such that

B ≈ AᵀSA , and ‖Az‖_1 ≈ ‖z‖_1 ,

where A is unitary and S is diagonal positive definite. Given a point z_k at iteration k, we can rewrite F(z) as
F(z) = E(z_k) + (z_k − y)ᵀB(z − z_k) + Q_B(z, z_k) , (4)

with Q_B(v, w) := (1/2)(v − w)ᵀB(v − w) + λ‖v‖_1 . For any diagonal positive definite matrix S and unitary matrix A, the surrogate loss F̃(z, z_k) := E(z_k) + (z_k − y)ᵀB(z − z_k) + Q_S(Az, Az_k) can be explicitly minimized, since

arg min_z F̃(z, z_k) = Aᵀ arg min_u ( (z_k − y)ᵀBAᵀ(u − Az_k) + Q_S(u, Az_k) )
                    = Aᵀ arg min_u Q_S( u, Az_k − S⁻¹AB(z_k − y) ) , (5)
where we use the variable change u = Az. As S is diagonal positive definite, (5) is separable and can be computed easily, using a linear operation followed by a point-wise non-linear soft-thresholding. Thus, any couple (A, S) yields a computationally cheap scheme. The question is then how to factorize B using S and A in an optimal manner, that is, such that the resulting proximal splitting sequence converges as fast as possible to the sparse coding solution.
2.2 Non-asymptotic Analysis
We will now establish convergence results based on the previous factorization. These bounds will inform us on how to best choose the factors Ak and Sk in each iteration.
For that purpose, let us define

δ_A(z) = λ( ‖Az‖_1 − ‖z‖_1 ) , and R = AᵀSA − B . (6)

The quantity δ_A(z) thus measures how invariant the ℓ1 norm is to the unitary operator A, whereas R corresponds to the residual of approximating the original Gram matrix B by our factorization AᵀSA. Given a current estimate z_k, we can rewrite
F̃ (z, zk) = F (z) + 1
2 (z − zk)TR(z − zk) + δA(z) . (7)
By imposing that R is a positive semidefinite residual, one immediately obtains the following bound.
Proposition 2.1. Suppose that R = A^T S A − B is positive semidefinite, and define

$$ z_{k+1} = \arg\min_z \tilde{F}(z, z_k) \,. \tag{8} $$

Then

$$ F(z_{k+1}) - F(z^*) \le \frac{1}{2} \|R\| \, \|z_k - z^*\|_2^2 + \delta_A(z^*) - \delta_A(z_{k+1}) \,. \tag{9} $$
Proof. By definition of z_{k+1} and using the fact that R ⪰ 0, we have

$$ \begin{aligned} F(z_{k+1}) - F(z^*) &\le F(z_{k+1}) - \tilde F(z_{k+1}, z_k) + \tilde F(z^*, z_k) - F(z^*) \\ &= -\tfrac{1}{2}(z_{k+1} - z_k)^T R (z_{k+1} - z_k) - \delta_A(z_{k+1}) + \tfrac{1}{2}(z^* - z_k)^T R (z^* - z_k) + \delta_A(z^*) \\ &\le \tfrac{1}{2}(z^* - z_k)^T R (z^* - z_k) + \left( \delta_A(z^*) - \delta_A(z_{k+1}) \right) , \end{aligned} $$

where the first line results from the definition of z_{k+1} and the third line uses the positive semidefiniteness of R.
This simple bound reveals that to obtain fast approximations of the sparse coding, it is sufficient to find S and A such that ‖R‖ is small and the ℓ₁ commutation term δ_A is small. These two conditions will often be in tension: one can always obtain R ≡ 0 by using the singular value decomposition B = A_0^T S_0 A_0 and setting A = A_0 and S = S_0; however, the resulting A_0 might introduce a large commutation error δ_{A_0}. Similarly, as the absolute value is non-expansive, i.e. ||a| − |b|| ≤ |a − b|, we have

$$ |\delta_A(z)| = \lambda \left| \|Az\|_1 - \|z\|_1 \right| \le \lambda \|(A - I)z\|_1 \le \lambda \sqrt{2 \max(\|Az\|_0, \|z\|_0)} \cdot \|A - I\| \cdot \|z\|_2 \,, \tag{10} $$

where we have used the Cauchy–Schwarz inequality ‖x‖₁ ≤ √‖x‖₀ ‖x‖₂ in the last step. In particular, (10) shows that unitary matrices in the neighborhood of I, with ‖A − I‖ small, have a small ℓ₁ commutation error δ_A but can be inappropriate for approximating a general matrix B.
The commutation error also depends upon the sparsity of z and Az: if both z and Az are sparse, then the commutation error is reduced, which can be achieved if A is itself a sparse unitary matrix. Moreover, since

$$ |\delta_A(z) - \delta_A(z')| \le \lambda \left| \|z\|_1 - \|z'\|_1 \right| + \lambda \left| \|Az\|_1 - \|Az'\|_1 \right| $$

and

$$ \left| \|z\|_1 - \|z'\|_1 \right| \le \|z - z'\|_1 \le \sqrt{\|z - z'\|_0} \, \|z - z'\|_2 \,, $$

it results that δ_A is Lipschitz with respect to the Euclidean norm; let us denote by L_A(z) its local Lipschitz constant at z, which can be computed using the norm of the subgradient at z.¹ A uniform upper bound for this constant is (1 + ‖A‖₁)λ√m, but it is typically much smaller when z and Az are both sparse. Equation (8) defines an iterative procedure determined by the pairs {(A_k, S_k)}_k. The following theorem uses the previous results to compute an upper bound for the resulting sparse coding estimator.
Theorem 2.2. Let A_k, S_k be the pair of unitary and diagonal matrices corresponding to iteration k, chosen such that R_k = A_k^T S_k A_k − B ⪰ 0. It results that

$$ F(z_k) - F(z^*) \le \frac{(z^* - z_0)^T R_0 (z^* - z_0) + 2 L_{A_0}(z_1) \|z^* - z_1\|_2}{2k} + \frac{\alpha - \beta}{2k} \,, \tag{11} $$

with

$$ \alpha = \sum_{i=1}^{k-1} \left( 2 L_{A_i}(z_{i+1}) \|z^* - z_{i+1}\|_2 + (z^* - z_i)^T (R_{i-1} - R_i)(z^* - z_i) \right) \,, $$

$$ \beta = \sum_{i=0}^{k-1} (i+1) \left( (z_{i+1} - z_i)^T R_i (z_{i+1} - z_i) + 2\delta_{A_i}(z_{i+1}) - 2\delta_{A_i}(z_i) \right) \,, $$

where L_A(z) denotes the local Lipschitz constant of δ_A at z.
Remarks: If one sets A_k = I and S_k = ‖B‖I for all k ≥ 0, (11) corresponds to the bound of the ISTA algorithm (Beck & Teboulle, 2009).

We can specialize the theorem to the case when A_0, S_0 are chosen to minimize the bound (9) and A_k = I, S_k = ‖B‖I for k ≥ 1.

Corollary 2.3. If A_k = I, S_k = ‖B‖I for k ≥ 1, then

$$ F(z_k) - F(z^*) \le \frac{(z^* - z_0)^T R_0 (z^* - z_0) + 2 L_{A_0}(z_1)\left( \|z^* - z_1\| + \|z_1 - z_0\| \right) + (z^* - z_1)^T R_0 (z^* - z_1)}{2k} \,. \tag{12} $$
This corollary shows that by simply replacing the first step of ISTA by the modified proximal step detailed in (5), one can obtain an improved bound at fixed k as soon as

$$ 2\|R_0\| \max(\|z^* - z_0\|_2^2, \|z^* - z_1\|_2^2) + 4 L_{A_0}(z_1) \max(\|z^* - z_0\|_2, \|z^* - z_1\|_2) \le \|B\| \, \|z^* - z_0\|_2^2 \,, $$

which, assuming ‖z^* − z_0‖₂ ≥ ‖z^* − z_1‖₂, translates into

$$ \|R_0\| + 2 \frac{L_{A_0}(z_1)}{\|z^* - z_0\|_2} \le \frac{\|B\|}{2} \,. \tag{13} $$
More generally, given a current estimate z_k, searching for a factorization (A_k, S_k) will improve the upper bound when

$$ \|R_k\| + 2 \frac{L_{A_k}(z_{k+1})}{\|z^* - z_k\|_2} \le \frac{\|B\|}{2} \,. \tag{14} $$
We emphasize that this is not a guarantee of acceleration, since it is based on improving an upper bound. However, it provides a simple picture on the mechanism that makes non-asymptotic acceleration possible.
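As an illustration only, condition (14) can be checked numerically for a candidate factorization. The helper below is hypothetical and uses the uniform bound L_A(z) ≤ (1 + ‖A‖₁)λ√m from Section 2.2 rather than the tighter local Lipschitz constant:

```python
import numpy as np

def factorization_may_help(R_k, A_k, z_star, z_k, B_norm, lam):
    # Sufficient condition (14), with L_{A_k}(z_{k+1}) replaced by the
    # uniform bound (1 + ||A||_1) * lam * sqrt(m); ||.|| is the spectral norm.
    m = A_k.shape[0]
    L_unif = (1.0 + np.linalg.norm(A_k, 1)) * lam * np.sqrt(m)
    lhs = np.linalg.norm(R_k, 2) + 2.0 * L_unif / np.linalg.norm(z_star - z_k)
    return lhs <= B_norm / 2.0
```

Since the uniform bound is loose, this test is conservative: it can fail even when the tighter local constant would satisfy (14).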
2.3 Interpretation
In this section we analyze the consequences of Theorem 2.2 in the design of fast sparse coding approximations, and provide a possible explanation for the behavior observed numerically.
2.3.1 "Phase Transition" and Law of Diminishing Returns
Equation (14) reveals that the optimal matrix factorization, in terms of minimizing the upper bound, depends upon the current scale of the problem, that is, on the distance ‖z^* − z_k‖. At the beginning of the optimization, when ‖z^* − z_k‖ is large, the bound (14) makes it easier to explore the space of factorizations (A, S) with A further away from the identity. Indeed, the bound tolerates larger increases in L_A(z_{k+1}), which is dominated by

$$ L_A(z_{k+1}) \le \lambda \left( \sqrt{\|z_{k+1}\|_0} + \sqrt{\|A z_{k+1}\|_0} \right) \,, $$

i.e. by the sparsity of both z_{k+1} and A z_{k+1}. On the other hand, when we reach intermediate solutions z_k such that ‖z^* − z_k‖ is small with respect to L_A(z_{k+1}), the upper bound is minimized by choosing factorizations where A is closer and closer to the identity, leading to the non-adaptive regime of standard ISTA (A = I).

¹ This quantity exists as δ_A is a difference of convex functions; see the proof of Lemma B.1 in Appendix B for details.
This is consistent with the numerical experiments, which show that the gains provided by learned sparse coding methods are mostly concentrated in the first iterations. Once the estimates reach a certain energy level, Section 3 shows that LISTA enters a steady state in which the convergence rate matches that of standard ISTA.
The natural follow-up question is to determine how many layers of adaptive splitting are sufficient before entering this steady regime of convergence. A conservative estimate of this quantity would require an upper bound on ‖z^* − z_k‖ derived from the energy bound F(z_k) − F(z^*). Since in general F is convex but not strongly convex, such a bound does not exist unless one assumes that F is locally strongly convex (for instance for sufficiently small values of F).
2.3.2 Improving the factorization for particular input distributions
Given an input dataset D = (x_i, z_i^{(0)}, z_i^*)_{i≤N}, containing examples x_i ∈ R^n, initial estimates z_i^{(0)} and sparse coding solutions z_i^*, the factorization adapted to D is defined as

$$ \min_{A, S;\; A^T A = I,\; A^T S A - B \succeq 0} \;\; \frac{1}{N} \sum_{i \le N} \frac{1}{2} (z_i^{(0)} - z_i^*)^T (A^T S A - B)(z_i^{(0)} - z_i^*) + \delta_A(z_i^*) - \delta_A(z_{1,i}) \,. \tag{15} $$

Therefore, adapting the factorization to a particular dataset, as opposed to enforcing it uniformly over a given ball B(z^*; R) (where the radius R ensures that the initial value z_0 ∈ B(z^*; R)), will always improve the upper bound (9). Studying the gains resulting from this adaptation to the input distribution is left for future work.
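To make (15) concrete, the following hypothetical helper (not from the paper's code) evaluates the empirical objective for a candidate pair (A, S), given the initial points, the solutions, and the first iterates z_{1,i}:

```python
import numpy as np

def factorization_objective(A, s, B, Z0, Zstar, Z1, lam):
    # Empirical objective (15); Z0, Zstar, Z1 hold z^(0)_i, z*_i and z_{1,i}
    # as rows of (N, m) arrays, and s is the diagonal of S.
    R = A.T @ (s[:, None] * A) - B                  # residual A^T S A - B
    E = Z0 - Zstar
    quad = 0.5 * np.einsum('ni,ij,nj->n', E, R, E)  # quadratic term per example
    delta_A = lambda Z: lam * (np.abs(Z @ A.T).sum(axis=1)
                               - np.abs(Z).sum(axis=1))
    return float(np.mean(quad + delta_A(Zstar) - delta_A(Z1)))
```

Minimizing this quantity over unitary A and diagonal positive S, subject to A^T S A − B ⪰ 0, is what the trained networks of Section 3 implicitly approximate.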
3 Numerical Experiments
This section provides numerical arguments to analyze adaptive optimization algorithms and their performance, and relates them to the theoretical properties developed in the previous section. All the experiments were run using Python and TensorFlow. For all the experiments, the training is performed using Adagrad (Duchi et al., 2011). The code to reproduce the figures is available online².
3.1 Adaptive Optimization Networks Architectures
LISTA/LFISTA In Gregor & Le Cun (2010), the authors introduced LISTA, a neural network constructed by considering ISTA as a recurrent neural net. At step k, ISTA performs the following two-step procedure:

$$ \text{1.}\quad u_{k+1} = z_k - \frac{1}{L} D^T (D z_k - x) = \underbrace{\left( I - \frac{1}{L} D^T D \right)}_{W_g} z_k + \underbrace{\frac{1}{L} D^T}_{W_e} x \,, $$

$$ \text{2.}\quad z_{k+1} = h_{\lambda/L}(u_{k+1}) \quad \text{where} \quad h_\theta(u) = \mathrm{sign}(u)\,(|u| - \theta)_+ \,. \tag{16} $$

² The code can be found at https://github.com/tomMoral/AdaptiveOptim
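In numpy, a minimal (hypothetical) implementation of this two-step procedure reads:

```python
import numpy as np

def ista(x, D, lam, n_iter=100):
    # Plain ISTA for the LASSO (1): one gradient step on the smooth part,
    # followed by soft-thresholding, as in the two-step procedure (16).
    L = np.linalg.norm(D, 2) ** 2          # largest eigenvalue of B = D^T D
    W_g = np.eye(D.shape[1]) - (D.T @ D) / L
    W_e = D.T / L
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        u = W_g @ z + W_e @ x
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
    return z
```

The matrices W_g and W_e are computed once and reused, so each iteration costs a matrix-vector product followed by an element-wise thresholding.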
This procedure combines a linear operation to compute u_{k+1} with an element-wise nonlinearity. It can be summarized as a recurrent neural network, presented in Figure 1a, with tied weights. The authors in Gregor & Le Cun (2010) considered the architecture Φ_Θ^K with parameters Θ = (W_g^{(k)}, W_e^{(k)}, θ^{(k)})_{k=1,...,K}, obtained by unfolding the recurrent network K times, as presented in Figure 1b. The layers φ_Θ^k are defined as

$$ z_{k+1} = \varphi_\Theta^k(z_k) := h_\theta(W_g z_k + W_e x) \,. \tag{17} $$
If W_g^{(k)} = I − D^T D / L, W_e^{(k)} = D^T / L and θ^{(k)} = λ/L are fixed for all K layers, the output of this neural net is exactly the vector z_K resulting from K steps of ISTA. With LISTA, the parameters Θ are learned using back-propagation to minimize the cost function

$$ f(\Theta) = \mathbb{E}_x \left[ F_x(\Phi_\Theta^K(x)) \right] \,. $$
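The forward pass of the unfolded network is then a straightforward generalization of the ISTA loop with per-layer weights; the sketch below is a hypothetical helper, with the back-propagation training loop omitted:

```python
import numpy as np

def lista_forward(x, params):
    # Forward pass of the unfolded network (17). `params` is a list of K
    # tuples (W_g, W_e, theta), e.g. initialized from ISTA as
    # (I - D^T D / L, D^T / L, lam / L) and then trained end-to-end.
    z = np.zeros(params[0][0].shape[0])
    for W_g, W_e, theta in params:
        u = W_g @ z + W_e @ x
        z = np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)
    return z
```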
A similar algorithm can be derived from FISTA, the accelerated version of ISTA, to obtain LFISTA (see Figure 5 in Appendix A). The architecture is very similar to LISTA, now with two memory taps:

$$ z_{k+1} = h_\theta(W_g z_k + W_m z_{k-1} + W_e x) \,. $$
Factorization network Our analysis in Section 2 suggests a re-parametrization of LISTA in a more structured class of parameters. Following the same basic architecture, and using (5), the network FacNet Ψ_Θ^K is formed using layers such that

$$ z_{k+1} = \psi_\Theta^k(z_k) := A^T h_{\lambda S^{-1}}\!\left( A z_k - S^{-1} A (D^T D z_k - D^T x) \right) \,, \tag{18} $$

with S diagonal and A unitary the parameters of the k-th layer. The parameters obtained after training such a network with back-propagation can be used with the theory developed in Section 2. Up to the last linear operation A^T of the network, this network is a re-parametrization of LISTA in a more constrained parameter space. Thus, LISTA is a generalization of this proposed network and should have performance at least as good as FacNet for a fixed number of layers.
The optimization can also be performed using back-propagation. To enforce the unitary constraints on A^{(k)}, the cost function is modified with a penalty:

$$ f(\Theta) = \mathbb{E}_x \left[ F_x(\Psi_\Theta^K(x)) \right] + \frac{\mu}{K} \sum_{k=1}^K \left\| I - (A^{(k)})^T A^{(k)} \right\|_2^2 \,, \tag{19} $$

with Θ = (A^{(k)}, S^{(k)})_{k=1...K} the parameters of the K layers and μ a scaling factor for the regularization. The resulting matrix A^{(k)} is then projected onto the Stiefel manifold using an SVD to obtain final parameters coherent with the network structure.
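A sketch of the post-training projection and of the penalty term of (19) is given below, assuming, as the notation leaves open, that the penalty uses the Frobenius norm; the projection U V^T replaces the singular values of A by ones and is the closest unitary matrix to A in Frobenius norm:

```python
import numpy as np

def project_stiefel(A):
    # Closest unitary matrix to A: keep the singular vectors, drop the
    # singular values.
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

def unitarity_penalty(A_list, mu):
    # Soft penalty of (19) pushing each layer's A^(k) towards unitarity.
    K = len(A_list)
    return mu / K * sum(np.linalg.norm(np.eye(A.shape[1]) - A.T @ A) ** 2
                        for A in A_list)
```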
Linear model Finally, it is important to distinguish the performance gain resulting from choosing a suitable starting point from the acceleration provided by our model. To highlight the gain obtained by changing the starting point, we considered a linear model with one layer, such that z_out = A^{(0)} x. This model is learned using SGD with the convex cost function

$$ f(A^{(0)}) = \|(I - D A^{(0)}) x\|_2^2 + \lambda \|A^{(0)} x\|_1 \,. $$

It computes a tradeoff between starting from the sparsest point 0 and a point with minimal reconstruction error y. We then observe the performance of the classical ISTA iterations using z_out as a starting point instead of 0.
3.2 Synthetic problems with known distributions
Gaussian dictionary In order to disentangle the role of the dictionary structure from the role of the data distribution structure, the minimization problem is tested using a synthetic generative model with no structure in the weight distribution. First, m atoms d_i ∈ R^n are drawn i.i.d. from a multivariate Gaussian with mean 0 and covariance I_n, and the dictionary D is defined as (d_i / ‖d_i‖₂)_{i=1...m}. The data points are generated from their sparse codes following a Bernoulli-Gaussian model: the coefficients z = (z_1, …, z_m) are constructed as z_i = b_i a_i, with b_i ∼ B(ρ) and a ∼ N(0, σ I_m), where ρ controls the sparsity of the data. The values are set to m = 100, n = 64 for the dictionary dimensions, ρ = 5/m for the sparsity level and σ = 10 for the activation coefficient generation. The sparsity regularization is set to λ = 0.01. The batches used for training are generated with the model at each step, and the cost function is evaluated over a fixed test set not used in the training.
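A compact numpy version of this generative model, a sketch using the parameter values above rather than the authors' exact code, reads:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, rho, sigma = 100, 64, 5 / 100, 10.0

# Gaussian dictionary with unit-norm atoms as columns.
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0, keepdims=True)

def sample_batch(batch_size):
    # Bernoulli-Gaussian sparse codes z_i = b_i * a_i and data x = D z.
    b = rng.random((batch_size, m)) < rho
    a = sigma * rng.standard_normal((batch_size, m))
    z = b * a
    return z @ D.T, z            # x has shape (batch_size, n)
```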
Figure 2 displays the cost performance of ISTA/FISTA/Linear as a function of their iterations, and of LISTA/LFISTA/FacNet as a function of the number of layers used to solve our generated problem. Linear has performance comparable to the learned methods at the first iteration, but a gap appears as the number of layers increases, up to a point where it achieves the same performance as the non-adaptive methods. This highlights that adaptation is possible in the subsequent layers of the networks, going further than choosing a suitable starting point for iterative methods. The first layers achieve a large gain over the classical optimization strategy by leveraging the structure of the problem. This appears even with no structure in the sparsity patterns of the input data, in accordance with the results in the previous section. We also observe diminishing returns as the number of layers increases. This results from the phase transition described in Section 2.3.1, as the last layers behave as ISTA steps and do not speed up the convergence. The three learned algorithms always perform at least as well as their classical counterparts, as stated in Theorem 2.2. We also explored the effect of the sparsity level on the training and learning of adaptive networks. In the denser setting, the arbitrage between the ℓ₁-norm and the squared error is easier, as the solution has many non-zero coefficients; in this setting, the approximate method is more precise than in the very sparse setting, where the approximation must perform a fine selection of the coefficients. But it also yields lower gains at the beginning, as the sparser solution can move faster.
There is a small gap between LISTA and FacNet in this setup. This can be explained by the extra constraints on the weights that we impose in FacNet, which effectively reduce the parameter space by half. Also, we implement the unitary constraints on the matrix A through a soft regularization (see (19)), involving an extra hyper-parameter μ that also contributes to the small performance gap. In any case, these experiments show that our analysis accounts for most of the acceleration provided by LISTA, as the performance of both methods is similar, up to optimization errors.
Adversarial dictionary The results from Section 2 show that problems whose Gram matrix has large eigenvalues associated with non-sparse eigenvectors are harder to accelerate. Indeed, it is not possible in this case to find a quasi-diagonalization of the matrix B that does not distort the ℓ₁ norm. It is possible to generate such a dictionary using harmonic analysis. The discrete Fourier transform (DFT) strongly distorts the ℓ₁ ball, since a very sparse vector in the temporal domain is transformed into a widely spread spectrum in the Fourier domain. We can thus design a dictionary for which the performance of LISTA and FacNet should be degraded: D = (d_i / ‖d_i‖₂)_{i=1...m} is constructed such that d_{j,k} = e^{−2πi jζ_k}, with (ζ_k)_{k≤n} randomly selected from {1/m, …, (m/2)/m} without replacement.
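A sketch of this construction is given below. Note that n ≤ m/2 is required for sampling without replacement, so the dimensions here (m = 256, n = 64) are illustrative assumptions, and since the text leaves implicit how the complex atoms are mapped to a real-valued problem, the sketch simply keeps D complex:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 256, 64                                    # requires n <= m / 2

# Frequencies zeta_k drawn without replacement from {1/m, ..., (m/2)/m}.
zeta = rng.choice(np.arange(1, m // 2 + 1) / m, size=n, replace=False)
atoms = np.exp(-2.0j * np.pi * np.outer(zeta, np.arange(1, m + 1)))
D = atoms / np.linalg.norm(atoms, axis=0, keepdims=True)  # unit-norm atoms
```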
The resulting performance is reported in Figure 3. The first layer provides a large gain by changing the starting point of the iterative methods: it arbitrates the tradeoff between starting from 0 and starting from y. But the next layers do not yield any extra gain compared to the original ISTA algorithm; after 4 layers, the cost performance of both adaptive methods and ISTA are equivalent. Clearly, in this case FacNet does not efficiently accelerate the sparse coding, in accordance with our results from Section 2. LISTA also displays poor performance in this setting. This provides further evidence that FacNet and LISTA share the same acceleration mechanism, as adversarial dictionaries for FacNet are also adversarial for LISTA.
3.3 Sparse coding with overcomplete dictionaries on images
Wavelet encoding for natural images A highly structured dictionary composed of translation-invariant Haar wavelets is used to encode 8x8 patches of images from the PASCAL VOC 2008 dataset. The network is used to learn an efficient sparse coder for natural images over this family. 500 images are sampled from the dataset to train the encoder. Training batches are obtained by uniformly sampling patches from the training image set to feed the stochastic optimization of the network. The encoder is then tested with 10000 patches sampled from 100 new images from the same dataset.
Learned dictionary for MNIST To evaluate the performance of LISTA for dictionary learning, LISTA was used to encode MNIST images over an unconstrained dictionary, learned a priori using classical dictionary learning techniques. The dictionary of 100 atoms was learned from 10000 grayscale MNIST images rescaled to 17x17, using the implementation of Mairal et al. (2009) provided in scikit-learn, with λ = 0.05. Then, the networks were trained through backpropagation using all 60000 images from the MNIST training set. Finally, the performance of these encoders was evaluated on the 10000 images of the MNIST test set.
Figure 4 displays the cost performance of the adaptive procedures compared to the non-adaptive algorithms. In both scenarios, FacNet has performance comparable to that of LISTA, and their behavior is in accordance with the theory developed in Section 2. The gains become smaller with each added layer, and the initial gain is achieved whether the dictionary is structured or unstructured. The MNIST case presents a much larger gain compared to the experiment with natural images. This results from the difference in structure of the input distributions: MNIST digits are much more constrained than patches from natural images, and the network is able to leverage this to find a better encoder. In the MNIST case, a network composed of 12 layers is sufficient to achieve performance comparable to ISTA with more than 1000 iterations.
4 Conclusions
In this paper we studied the problem of finite computational budget approximation of sparse coding. Inspired by the ability of neural networks to accelerate over splitting methods on the first few iterations, we have studied which properties of the dictionary matrix and the data distribution lead to such acceleration. Our analysis reveals that one can obtain acceleration by finding approximate matrix factorizations of the dictionary which nearly diagonalize its Gram matrix, but whose orthogonal transformations leave approximately invariant the `1 ball. By appropriately balancing these two conditions, we show that the resulting rotated proximal splitting scheme has an upper bound which improves over the ISTA upper bound under appropriate sparsity.
In order to relate this specific factorization property to the actual LISTA algorithm, we have introduced a reparametrization of the neural network that specifically computes the factorization, and incidentally has reduced learning complexity (fewer parameters) compared to the original LISTA. The numerical experiments of Section 3 show that such a reparametrization recovers the same gains as the original neural network, providing evidence that our theoretical analysis partially explains the behavior of the LISTA neural network. Our acceleration scheme is inherently transient, in the sense that once the iterates are sufficiently close to the optimum, the factorization is no longer effective. This transient effect is also consistent with the performance observed numerically, although the possibility remains open to find alternative models that further exploit the particular structure of the sparse coding. Finally, we provide evidence that a successful matrix factorization is not only sufficient but also necessary for acceleration, by showing that Fourier dictionaries are not accelerated.
Despite these initial results, a lot remains to be understood on the general question of optimal tradeoffs between computational budget and statistical accuracy. Our analysis so far did not take into account any probabilistic consideration (e.g. obtain approximations that hold with high probability or in expectation). Another area of further study is the extension of our analysis to the FISTA case, and more generally to other inference tasks that are currently solved via iterative procedures compatible with neural network parametrizations, such as inference in Graphical Models using Belief Propagation or other ill-posed inverse problems.
A Learned FISTA
A similar algorithm can be derived from FISTA, the accelerated version of ISTA, to obtain LFISTA (see Figure 5). The architecture is very similar to LISTA, now with two memory taps. FISTA introduces a momentum term to improve the convergence rate of ISTA as follows:

$$ \text{1.}\quad y_k = z_k + \frac{t_{k-1} - 1}{t_k} (z_k - z_{k-1}) \,, $$

$$ \text{2.}\quad z_{k+1} = h_{\lambda/L}\!\left( y_k - \frac{1}{L} \nabla E(y_k) \right) = h_{\lambda/L}\!\left( \left( I - \frac{1}{L} B \right) y_k + \frac{1}{L} D^T x \right) \,, $$

$$ \text{3.}\quad t_{k+1} = \frac{1 + \sqrt{1 + 4 t_k^2}}{2} \,. $$
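For reference, these three steps translate into the following minimal numpy sketch (a hypothetical implementation, assuming the conventional initialization t_0 = 1 and y_0 = z_0 = 0):

```python
import numpy as np

def fista(x, D, lam, n_iter=100):
    # FISTA: an ISTA gradient/thresholding step on y_k, followed by the
    # momentum update of step 1 and the t_k recursion of step 3.
    L = np.linalg.norm(D, 2) ** 2
    B, Dtx = D.T @ D, D.T @ x
    z_prev = np.zeros(D.shape[1])
    y = z_prev.copy()
    t = 1.0
    for _ in range(n_iter):
        u = y - (B @ y - Dtx) / L                    # step 2, gradient part
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = z + (t - 1.0) / t_next * (z - z_prev)    # step 1, momentum
        z_prev, t = z, t_next
    return z_prev
```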
By substituting the expression for y_k from step 1 into step 2, we obtain a generic recurrent architecture very similar to LISTA, now with two memory taps, that we denote LFISTA:

$$ z_{k+1} = h_\theta\!\left( W_g^{(k)} z_k + W_m^{(k)} z_{k-1} + W_e^{(k)} x \right) \,. $$
This model is equivalent to running K steps of FISTA when its parameters are initialized with

$$ W_g^{(k)} = \left( 1 + \frac{t_{k-1} - 1}{t_k} \right) \left( I - \frac{1}{L} B \right) \,, \quad W_m^{(k)} = \left( \frac{1 - t_{k-1}}{t_k} \right) \left( I - \frac{1}{L} B \right) \,, \quad W_e^{(k)} = \frac{1}{L} D^T \,. $$
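A small hypothetical helper computing these initialization weights, which make the K-layer network reproduce K steps of FISTA exactly (assuming t_0 = t_1 = 1), could look as follows:

```python
import numpy as np

def lfista_init(D, lam, K):
    # Per-layer weights (W_g, W_m, W_e, theta) matching K steps of FISTA.
    L = np.linalg.norm(D, 2) ** 2
    M = np.eye(D.shape[1]) - (D.T @ D) / L
    params, t_prev, t = [], 1.0, 1.0
    for _ in range(K):
        W_g = (1.0 + (t_prev - 1.0) / t) * M
        W_m = ((1.0 - t_prev) / t) * M
        params.append((W_g, W_m, D.T / L, lam / L))
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return params
```

With t_0 = 1, the first layer reduces to a plain ISTA step (W_m^{(1)} = 0), as expected from the recursion.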
The parameters of this new architecture, presented in Figure 5, are trained analogously to the LISTA case.
B Proofs
Lemma B.1. Suppose that R = A^T S A − B is positive semidefinite, and define

$$ z_{k+1} = \arg\min_z \tilde{F}(z, z_k) \,, \tag{20} $$

and δ_A(z) = ‖Az‖₁ − ‖z‖₁. Then we have

$$ F(z_{k+1}) - F(z^*) \le \frac{1}{2} \left( (z^* - z_k)^T R (z^* - z_k) - (z^* - z_{k+1})^T R (z^* - z_{k+1}) \right) + \langle \partial \delta_A(z_{k+1}),\, z_{k+1} - z^* \rangle \,. \tag{21} $$
Proof. We define

$$ f(t) = F\!\left( t z_{k+1} + (1 - t) z^* \right) \,, \quad t \in [0, 1] \,. $$

Since F is convex, f is also convex on [0, 1]. Since f(0) = F(z^*) is the global minimum, it results that f′(t) is increasing on (0, 1], and hence

$$ F(z_{k+1}) - F(z^*) = f(1) - f(0) = \int_0^1 f'(t)\, dt \le f'(1) \,, $$

where f′(1) is any element of ∂f(1). Since δ_A(z) is a difference of convex functions, its subgradient can be defined as a limit of infimal convolutions (Hiriart-Urruty, 1991). We have ∂f(1) = ⟨∂F(z_{k+1}), z_{k+1} − z^*⟩, and since ∂F(z) = ∂F̃(z, z_k) − R(z − z_k) − ∂δ_A(z) and 0 ∈ ∂F̃(z_{k+1}, z_k), it results that ∂F(z_{k+1}) = −R(z_{k+1} − z_k) − ∂δ_A(z_{k+1}), and thus

$$ F(z_{k+1}) - F(z^*) \le (z^* - z_{k+1})^T R (z_{k+1} - z_k) + \langle \partial \delta_A(z_{k+1}),\, z^* - z_{k+1} \rangle \,. \tag{22} $$

(21) is obtained by observing that

$$ (z^* - z_{k+1})^T R (z_{k+1} - z_k) \le \frac{1}{2} \left( (z^* - z_k)^T R (z^* - z_k) - (z^* - z_{k+1})^T R (z^* - z_{k+1}) \right) \,, \tag{23} $$

thanks to the fact that R ⪰ 0.
Theorem B.2. Let A_k, S_k be the pair of unitary and diagonal matrices corresponding to iteration k, chosen such that R_k = A_k^T S_k A_k − B ⪰ 0. It results that

$$ F(z_k) - F(z^*) \le \frac{(z^* - z_0)^T R_0 (z^* - z_0) + 2\langle \nabla\delta_{A_0}(z_1),\, z^* - z_1 \rangle}{2k} + \frac{\alpha - \beta}{2k} \,, \tag{24} $$

with

$$ \alpha = \sum_{n=1}^{k-1} \left( 2\langle \nabla\delta_{A_n}(z_{n+1}),\, z^* - z_{n+1} \rangle + (z^* - z_n)^T (R_{n-1} - R_n)(z^* - z_n) \right) \,, $$

$$ \beta = \sum_{n=0}^{k-1} (n+1) \left( (z_{n+1} - z_n)^T R_n (z_{n+1} - z_n) + 2\delta_{A_n}(z_{n+1}) - 2\delta_{A_n}(z_n) \right) \,. $$
Proof: The proof is adapted from (Beck & Teboulle, 2009), Theorem 3.1. From Lemma B.1, we start by using (21) to bound terms of the form F(z_n) − F(z^*):

$$ F(z_n) - F(z^*) \le \langle \nabla\delta_{A_n}(z_{n+1}),\, z^* - z_{n+1} \rangle + \frac{1}{2} \left( (z^* - z_n)^T R_n (z^* - z_n) - (z^* - z_{n+1})^T R_n (z^* - z_{n+1}) \right) \,. $$

Adding these inequalities for n = 0 … k − 1, we obtain

$$ \sum_{n=0}^{k-1} F(z_n) - k F(z^*) \le \sum_{n=0}^{k-1} \langle \nabla\delta_{A_n}(z_{n+1}),\, z^* - z_{n+1} \rangle + \frac{1}{2} \left( (z^* - z_0)^T R_0 (z^* - z_0) - (z^* - z_k)^T R_{k-1} (z^* - z_k) \right) + \frac{1}{2} \sum_{n=1}^{k-1} (z^* - z_n)^T (R_{n-1} - R_n)(z^* - z_n) \,. \tag{25} $$

On the other hand, we also have

$$ F(z_n) - F(z_{n+1}) \ge F(z_n) - \tilde F(z_n, z_n) + \tilde F(z_{n+1}, z_n) - F(z_{n+1}) = -\delta_{A_n}(z_n) + \delta_{A_n}(z_{n+1}) + \frac{1}{2} (z_{n+1} - z_n)^T R_n (z_{n+1} - z_n) \,, $$

which results in

$$ \sum_{n=0}^{k-1} (n+1)\left( F(z_n) - F(z_{n+1}) \right) \ge \sum_{n=0}^{k-1} (n+1) \left( \frac{1}{2}(z_{n+1} - z_n)^T R_n (z_{n+1} - z_n) + \delta_{A_n}(z_{n+1}) - \delta_{A_n}(z_n) \right) \,, \tag{26} $$

that is, after telescoping the left-hand side,

$$ \sum_{n=0}^{k-1} F(z_n) - k F(z_k) \ge \sum_{n=0}^{k-1} (n+1) \left( \frac{1}{2}(z_{n+1} - z_n)^T R_n (z_{n+1} - z_n) + \delta_{A_n}(z_{n+1}) - \delta_{A_n}(z_n) \right) \,. $$

Combining (25) and (26), we obtain

$$ F(z_k) - F(z^*) \le \frac{(z^* - z_0)^T R_0 (z^* - z_0) + 2\langle \nabla\delta_{A_0}(z_1),\, z^* - z_1 \rangle}{2k} + \frac{\alpha - \beta}{2k} \,, \tag{27} $$

with

$$ \alpha = \sum_{n=1}^{k-1} \left( 2\langle \nabla\delta_{A_n}(z_{n+1}),\, z^* - z_{n+1} \rangle + (z^* - z_n)^T (R_{n-1} - R_n)(z^* - z_n) \right) \,, $$

$$ \beta = \sum_{n=0}^{k-1} (n+1) \left( (z_{n+1} - z_n)^T R_n (z_{n+1} - z_n) + 2\delta_{A_n}(z_{n+1}) - 2\delta_{A_n}(z_n) \right) \,. $$
Corollary B.3. If A_k = I, S_k = ‖B‖I for k > 0, then

$$ F(z_k) - F(z^*) \le \frac{(z^* - z_0)^T R_0 (z^* - z_0) + 2 L_{A_0}(z_1)\left( \|z^* - z_1\| + \|z_1 - z_0\| \right) + (z^* - z_1)^T R_0 (z^* - z_1)}{2k} \,. \tag{28} $$
Proof: We verify that in that case, R_{n−1} − R_n ≡ 0 for n > 1 and δ_{A_n} ≡ 0 for n > 0.

1. What is the focus of the reviewed paper, and how does it relate to the original LISTA proposal?
2. What are the strengths of the paper, particularly in terms of its analysis and insights?
3. Are there any concerns or questions regarding the presentation of the learned dictionary results?
4. How does the reviewer assess the overall contribution and novelty of the paper's content?
5. Is there any suggestion for improving the title of the paper to better reflect its contents?

Review
This work presents an analysis of LISTA, which was originally proposed to accelerate sparse coding algorithms by exploiting prior knowledge of the structure of the problem. The authors here propose a solid analysis of the acceleration performance of LISTA, using a specific matrix factorisation of the dictionary.
The analysis is well structured and provides interesting insights. It would have been good to tie these insights more closely to specific properties of the data or input distributions.
The learned dictionary results in Section 3.3 are not very clear: is the dictionary learned with a sort of alternating minimisation strategy that would include LISTA as the sparse coding step? Or is it only the sparse coding that is studied, with a dictionary that has been learned a priori?
Overall, the paper does not propose a new algorithm or representation, but provides key insights on a well-known and interesting acceleration method for sparse coding. This is quite a nice piece of work. The title seems however a bit confusing, as 'neural sparse coding' actually means 'LISTA', or 'neural network acceleration of sparse coding' - basically, it is not immediate to understand what 'neural sparse coding' means.
ICLR | Title
Understanding Trainable Sparse Coding with Matrix Factorization
Abstract
Sparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, such as the Iterative Soft Thresholding Algorithm and its accelerated version (ISTA, FISTA). These methods are optimal in the class of first-order methods for non-smooth, convex functions. However, they do not exploit the particular structure of the problem at hand nor the input data distribution. An acceleration using neural networks, coined LISTA, was proposed in Gregor & Le Cun (2010), which showed empirically that one could achieve high quality estimates with few iterations by modifying the parameters of the proximal splitting appropriately. In this paper we study the reasons for such acceleration. Our mathematical analysis reveals that it is related to a specific matrix factorization of the Gram kernel of the dictionary, which attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the `1 ball. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys an improved convergence bound with respect to the non-adaptive version. Moreover, our analysis also shows that conditions for acceleration occur mostly at the beginning of the iterative process, consistent with numerical experiments. We further validate our analysis by showing that on dictionaries where this factorization does not exist, adaptive acceleration fails.
1 Introduction
Feature selection is a crucial point in high dimensional data analysis. Different techniques have been developed to tackle this problem efficiently, and amongst them sparsity has emerged as a leading paradigm. In statistics, the LASSO estimator (Tibshirani, 1996) provides a reliable way to select features and has been extensively studied in the last two decades (Hastie et al. (2015) and references therein). In machine learning and signal processing, sparse coding has made its way into several modern architectures, including large scale computer vision (Coates & Ng, 2011) and biologically inspired models (Cadieu & Olshausen, 2012). Also, Dictionary learning is a generic unsupervised learning method to perform nonlinear dimensionality reduction with efficient computational complexity (Mairal et al., 2009). All these techniques heavily rely on the resolution of `1-regularized least squares.
The `1-sparse coding problem is defined as solving, for a given input x ∈ Rn and dictionary D ∈ Rn×m, the following problem:
z∗(x) = arg min z Fx(z) ∆ =
1 2 ‖x−Dz‖2 + λ‖z‖1 . (1)
This problem is convex and can therefore be solved using convex optimization machinery. Proximal splitting methods (Beck & Teboulle, 2009) alternate between the minimization of the smooth and differentiable part using the gradient information and the minimization of the non-differentiable part using a proximal operator (Combettes & Bauschke, 2011). These methods can also be accelerated by considering a momentum term, as it is done in FISTA
∗Work done while appointed at UC Berkeley, Statistics Department (currently on leave)
(Beck & Teboulle, 2009; Nesterov, 2005). Coordinate descent (Friedman et al., 2007; Osher & Li, 2009) leverages the closed formula that can be derived for optimizing the problem (1) for one coordinate zi given that all the other are fixed. At each step of the algorithm, one coordinate is updated to its optimal value, which yields an inexpensive scheme to perform each step. The choice of the coordinate to update at each step is critical for the performance of the optimization procedure. Least Angle Regression (LARS) (Hesterberg et al., 2008) is another method that computes the whole LASSO regularization path. These algorithms all provide an optimization procedure that leverages the local properties of the cost function iteratively. They can be shown to be optimal among the class of first-order methods for generic convex, non-smooth functions (Bubeck, 2014).
But all these results are given in the worst case and do not use the distribution of the considered problem. One can thus wonder whether a more efficient algorithm to solve (1) exists for a fixed dictionary D and generic input x drawn from a certain input data distribution. In Gregor & Le Cun (2010), the authors introduced LISTA, a trained version of ISTA that adapts the parameters of the proximal splitting algorithm to approximate the solution of the LASSO using a finite number of steps. This method exploits the common structure of the problem to learn a better transform than the generic ISTA step. As ISTA is composed of a succession of linear operations and piecewise non linearities, the authors use the neural network framework and the backpropagation to derive an efficient procedure solving the LASSO problem. In Sprechmann et al. (2012), the authors extended LISTA to more generic sparse coding scenarios and showed that adaptive acceleration is possible under general input distributions and sparsity conditions.
In this paper, we are interested in the following question: Given a finite computational budget, what is the optimum estimator of the sparse coding? This question belongs to the general topic of computational tradeoffs in statistical inference. Randomized sketches (Alaoui & Mahoney, 2015; Yang et al., 2015) reduce the size of convex problems by projecting expensive kernel operators into random subspaces, and reveal a tradeoff between computational efficiency and statistical accuracy. Agarwal (2012) provides several theoretical results on perfoming inference under various computational constraints, and Chandrasekaran & Jordan (2013) considers a hierarchy of convex relaxations that provide practical tradeoffs between accuracy and computational cost. More recently, Oymak et al. (2015) provides sharp time-data tradeoffs in the context of linear inverse problems, showing the existence of a phase transition between the number of measurements and the convergence rate of the resulting recovery optimization algorithm. Giryes et al. (2016) builds on this result to produce an analysis of LISTA that describes acceleration in conditions where the iterative procedure has linear convergence rate. Finally, Xin et al. (2016) also studies the capabilities of Deep Neural networks at approximating sparse inference. The authors show that unrolled iterations lead to better approximation if one allows the weights to vary at each layer, contrary to standard splitting algorithms. Whereas their focus is on relaxing the convergence hypothesis of iterative thresholding algorithms, we study a complementary question, namely when is speedup possible, without assuming strongly convex optimization. Their results are consistent with ours, since our analysis also shows that learning shared layer weights is less effective.
Inspired by the LISTA architecture, our mathematical analysis reveals that adaptive acceleration is related to a specific matrix factorization of the Gram matrix of the dictionary B = DTD as B = ATSA−R ,where A is unitary, S is diagonal and the residual is positive semidefinite: R 0. Our factorization balances between near diagonalization by asking that ‖R‖ is small and small perturbation of the `1 norm, i.e. ‖Az‖1 − ‖z‖1 is small. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys a convergence rate with improved constants with respect to the non-adaptive version. Moreover, our analysis also shows that acceleration is mostly possible at the beginning of the iterative process, when the current estimate is far from the optimal solution, which is consistent with numerical experiments. We also show that the existence of this factorization is not only sufficient for acceleration, but also necessary. This is shown by constructing dictionaries whose Gram matrix diagonalizes in a basis that is incoherent with the canonical basis, and verifying that LISTA fails in that case to accelerate with respect to ISTA.
In our numerical experiments, we design a specialized version of LISTA called FacNet, with more constrained parameters, which is then used as a tool to show that our theoretical analysis captures the acceleration mechanism of LISTA. Our theoretical results can be applied to FacNet and as LISTA is a generalization of this model, it always performs at least as well, showing that the existence of the factorization is a sufficient certificate for acceleration by
LISTA. Reciprocally, we show that for cases where no acceleration is possible with FacNet, the LISTA model also fail to provide acceleration, linking the two speedup mechanisms. This numerical evidence suggest that the existence of our proposed factorization is sufficient and somewhat necessary for LISTA to show good results.
The rest of the paper is structured as follows. Section 2 presents our mathematical analysis and proves the convergence of the adaptive algorithm as a function of the quality of the matrix factorization. Finally, Section 3 presents the generic architectures that will enable the usage of such schemes and the numerical experiments, which validate our analysis over a range of different scenarios.
2 Accelerating Sparse Coding with Sparse Matrix Factorizations
2.1 Unitary Proximal Splitting
In this section we describe our setup for accelerating sparse coding based on the Proximal Splitting method. Let Ω ⊂ Rn be the set describing our input data, and D ∈ Rn×m be a dictionary, with m > n. We wish to find fast and accurate approximations of the sparse coding z∗(x) of any x ∈ Ω, defined in (1) For simplicity, we denote B = DTD and y = D†x to rewrite (1) as
z∗(x) = arg min z Fx(z) =
1 2 (y − z)TB(y − z)︸ ︷︷ ︸
E(z)
+λ‖z‖1︸ ︷︷ ︸ G(z) . (2)
For clarity, we will refer to Fx as F and to z ∗(x) as z∗. The classic proximal splitting technique finds z∗ as the limit of sequence (zk)k, obtained by successively constructing a surrogate loss Fk(z) of the form
Fk(z) = E(zk) + (zk − y)TB(z − zk) + Lk‖z − zk‖22 + λ‖z‖1 , (3) satisfying Fk(z) ≥ F (z) for all z ∈ Rm . Since Fk is separable in each coordinate of z, zk+1 = arg minz Fk(z) can be computed efficiently. This scheme is based on a majoration of the quadratic form (y − z)TB(y − z) with an isotropic quadratic form Lk‖zk − z‖22. The convergence rate of the splitting algorithm is optimized by choosing Lk as the smallest constant satisfying Fk(z) ≥ F (z), which corresponds to the largest singular value of B. The computation of zk+1 remains separable by replacing the quadratic form LkI by any diagonal form. However, the Gram matrix B = DTD might be poorly approximated via diagonal forms for general dictionaries. Our objective is to accelerate the convergence of this algorithm by finding appropriate factorizations of the matrix B such that
B ≈ ATSA , and ‖Az‖1 ≈ ‖z‖1 , where A is unitary and S is diagonal positive definite. Given a point zk at iteration k, we can rewrite F (z) as
F (z) = E(zk) + (zk − y)TB(z − zk) +QB(z, zk) , (4)
with QB(v, w) := 1
2 (v − w)TB(v − w) + λ‖v‖1 . For any diago-
nal positive definite matrix S and unitary matrix A, the surrogate loss F̃ (z, zk) := E(zk) + (zk − y)TB(z − zk) +QS(Az,Azk) can be explicitly minimized, since
arg min z F̃ (z, zk) = A T arg min u
( (zk − y)TBAT(u−Azk) +QS(u,Azk) ) = AT arg min
u QS
( u,Azk − S−1AB(zk − y) ) (5)
where we use the variable change u = Az. As S is diagonal positive definite, (5) is separable and can be computed easily, using a linear operation followed by a point-wise non linear soft-thresholding. Thus, any couple (A,S) ensures an computationally cheap scheme. The question is then how to factorize B using S and A in an optimal manner, that is, such that the resulting proximal splitting sequence converges as fast as possible to the sparse coding solution.
2.2 Non-asymptotic Analysis
We will now establish convergence results based on the previous factorization. These bounds will inform us on how to best choose the factors Ak and Sk in each iteration.
For that purpose, let us define δA(z) = λ ( ‖Az‖1 − ‖z‖1 ) , and R = ATSA−B . (6)
The quantity δA(z) thus measures how invariant the `1 norm is to the unitary operator A, whereas R corresponds to the residual of approximating the original Gram matrix B by our factorization ATSA . Given a current estimate zk, we can rewrite
F̃ (z, zk) = F (z) + 1
2 (z − zk)TR(z − zk) + δA(z) . (7)
By imposing that R is a positive semidefinite residual one immediately obtains the following bound.
Proposition 2.1. Suppose that R = ATSA−B is positive definite, and define
zk+1 = arg min z F̃ (z, zk) . (8)
Then F (zk+1)− F (z∗) ≤ 1
2 ‖R‖‖zk − z∗‖22+δA(z∗)− δA(zk+1) . (9)
Proof. By definition of zk+1 and using the fact that R 0 we have
F (zk+1)− F (z∗) ≤ F (zk+1)− F̃ (zk+1, zk) + F̃ (z∗, zk)− F (z∗)
= −1 2 (zk+1 − zk)TR(zk+1 − zk)− δA(zk+1) + 1 2 (z∗ − zk)TR(z∗ − zk) + δA(z∗) ≤ 1 2 (z∗ − zk)TR(z∗ − zk) + ( δA(z ∗)− δA(zk+1) ) .
where the first line results from the definition of zk+1 and the third line makes use of R positiveness.
This simple bound reveals that to obtain fast approximations to the sparse coding it is sufficient to find S and A such that ‖R‖ is small and that the `1 commutation term δA is small. These two conditions will be often in tension: one can always obtain R ≡ 0 by using the Singular Value Decomposition of B = AT0S0A0 and setting A = A0 and S = S0. However, the resulting A0 might introduce large commutation error δA0 . Similarly, as the
absolute value is non-expansive, i.e. ∣∣∣|a| − |b|∣∣∣ ≤ ∣∣a− b∣∣, we have that
|δA(z)| = λ ∣∣∣‖Az‖1 − ‖z‖1∣∣∣ ≤ λ‖(A− I)z‖1 (10)
≤ λ √ 2 max(‖Az‖0, ‖z‖0) · ‖A− I‖ · ‖z‖2 ,
where we have used the Cauchy-Schwartz inequality ‖x‖1 ≤ √ ‖x‖0‖x‖2 in the last equation. In particular, (10) shows that unitary matrices in the neighborhood of I with ‖A− I‖ small have small `1 commutation error δA but can be inappropriate to approximate general B matrix.
The commutation error also depends upon the sparsity of z and Az . If both z and Az are sparse then the commutation error is reduced, which can be achieved if A is itself a sparse unitary matrix. Moreover, since
|δA(z)− δA(z′)| ≤ λ|‖z‖1 − ‖z′‖1|+ λ|‖Az‖1 − ‖Az′‖1|
and |‖z‖1 − ‖z′‖1| ≤ ‖z − z′‖1 ≤ √ ‖z − z′‖0‖z − z′‖2
it results that δA is Lipschitz with respect to the Euclidean norm; let us denote by LA(z) its local Lipschitz constant in z, which can be computed using the norm of the subgradient
in z1. An uniform upper bound for this constant is (1+‖A‖1)λ √ m, but it is typically much smaller when z and Az are both sparse. Equation (8) defines an iterative procedure determined by the pairs {(Ak, Sk)}k. The following theorem uses the previous results to compute an upper bound of the resulting sparse coding estimator.
Theorem 2.2. Let Ak, Sk be the pair of unitary and diagonal matrices corresponding to iteration k, chosen such that Rk = A T kSkAk −B 0. It results that
F (zk)− F (z∗) ≤ (z∗ − z0)TR0(z∗ − z0) + 2LA0(z1)‖z∗ − z1‖2 2k + α− β 2k , (11)
with α = k−1∑ i=1 ( 2LAi(zi+1)‖z ∗ − zi+1‖2 + (z∗ − zi)T(Ri−1 −Ri)(z∗ − zi) ) ,
β = k−1∑ i=0 (i+ 1) ( (zi+1 − zi)TRi(zi+1 − zi) + 2δAi(zi+1)− 2δAi(zi) ) ,
where LA(z) denote the local lipschitz constant of δA at z.
Remarks: If one sets Ak = I and Sk = ‖B‖I for all k ≥ 0, (11) corresponds to the bound of the ISTA algorithm (Beck & Teboulle, 2009).
We can specialize the theorem in the case when A0, S0 are chosen to minimize the bound (9) and Ak = I, Sk = ‖B‖I for k ≥ 1. Corollary 2.3. If Ak = I, Sk = ‖B‖I for k ≥ 1 then
F (zk)−F (z∗) ≤ (z∗ − z0)TR0(z∗ − z0) + 2LA0(z1)(‖z∗ − z1‖+ ‖z1 − z0‖) + (z∗ − z1)TR0(z∗ − z1)T
2k .
(12)
This corollary shows that by simply replacing the first step of ISTA by the modified proximal step detailed in (5), one can obtain an improved bound at fixed k as soon as
2‖R0‖max(‖z∗−z0‖22, ‖z∗−z1‖22)+4LA0(z1) max(‖z∗−z0‖2, ‖z∗−z1‖2) ≤ ‖B‖‖z∗−z0‖22 , which, assuming ‖z∗ − z0‖2 ≥ ‖z∗ − z1‖2, translates into
‖R0‖+ 2 LA0(z1) ‖z∗ − z0‖2 ≤ ‖B‖ 2 . (13)
More generally, given a current estimate zk, searching for a factorization (Ak, Sk) will improve the upper bound when
‖Rk‖+ 2 LAk(zk+1) ‖z∗ − zk‖2 ≤ ‖B‖ 2 . (14)
We emphasize that this is not a guarantee of acceleration, since it is based on improving an upper bound. However, it provides a simple picture on the mechanism that makes non-asymptotic acceleration possible.
2.3 Interpretation
In this section we analyze the consequences of Theorem 2.2 in the design of fast sparse coding approximations, and provide a possible explanation for the behavior observed numerically.
2.3.1 ‘Phase Transition” and Law of Diminishing Returns
(14) reveals that the optimum matrix factorization in terms of minimizing the upper bound depends upon the current scale of the problem, that is, of the distance ‖z∗ − zk‖. At the beginning of the optimization, when ‖z∗ − zk‖ is large, the bound (14) makes it easier to explore the space of factorizations (A,S) with A further away from the identity. Indeed, the bound tolerates larger increases in LA(zk+1), which is dominated by
LA(zk+1) ≤ λ( √ ‖zk+1‖0 + √ ‖Azk+1‖0) ,
1 This quantity exists as δA is a difference of convex. See proof of ?? in appendices for precisions.
i.e. the sparsity of both z1 and A0(z1). On the other hand, when we reach intermediate solutions zk such that ‖z∗ − zk‖ is small with respect to LA(zk+1), the upper bound is minimized by choosing factorizations where A is closer and closer to the identity, leading to the non-adaptive regime of standard ISTA (A = Id).
This is consistent with the numerical experiments, which show that the gains provided by learned sparse coding methods are mostly concentrated in the first iterations. Once the estimates reach a certain energy level, section 3 shows that LISTA enters a steady state in which the convergence rate matches that of standard ISTA.
The natural follow-up question is to determine how many layers of adaptive splitting are sufficient before entering the steady regime of convergence. A conservative estimate of this quantity would require an upper bound of ‖z∗− zk‖ from the energy bound F (zk)−F (z∗). Since in general F is convex but not strongly convex, such bound does not exist unless one can assume that F is locally strongly convex (for instance for sufficiently small values of F ).
2.3.2 Improving the factorization to particular input distributions
Given an input dataset D = (xi, z(0)i , z∗i )i≤N , containing examples xi ∈ R n, initial estimates z (0) i and sparse coding solutions z ∗ i , the factorization adapted to D is defined as
min A,S; ATA=I,ATSA−B 0
1
N ∑ i≤N 1 2 (z (0) i − z ∗ i ) T(ATSA−B)(z(0)i − z ∗ i ) + δA(z ∗ i )− δA(z1,i) . (15)
Therefore, adapting the factorization to a particular dataset, as opposed to enforcing it uniformly over a given ball B(z∗;R) (where the radius R ensures that the initial value z0 ∈ B(z∗;R)), will always improve the upper bound (9). Studying the gains resulting from the adaptation to the input distribution will be let for future work.
3 Numerical Experiments
This section provides numerical arguments to analyse adaptive optimization algorithms and their performances, and relates them to the theoretical properties developed in the previous section. All the experiments were run using Python and Tensorflow. For all the experiments, the training is performed using Adagrad (Duchi et al., 2011). The code to reproduce the figures is available online2.
3.1 Adaptive Optimization Networks Architectures
LISTA/LFISTA In Gregor & Le Cun (2010), the authors introduced LISTA, a neural network constructed by considering ISTA as a recurrent neural net. At each step, ISTA performs the following 2-step procedure :
1. uk+1 = zk − 1
L DT(Dzk − x) = (I−
1
L DTD)︸ ︷︷ ︸
Wg
zk + 1
L DT︸ ︷︷ ︸ We x ,
2. zk+1 = h λ L
(uk+1) where hθ(u) = sign(u)(|u| − θ)+ , step k of ISTA (16) 2The code can be found at https://github.com/tomMoral/AdaptiveOptim
This procedure combines a linear operation to compute uk+1 with an element-wise non linearity. It can be summarized as a recurrent neural network, presented in Figure 1a., with tied weights. The autors in Gregor & Le Cun (2010) considered the architecture ΦKΘ with parameters Θ = (W (k) g ,W (k) e , θ(k))k=1,...K obtained by unfolding K times the recurrent network, as presented in Figure 1b. The layers φkΘ are defined as
zk+1 = φ k Θ(zk) := hθ(Wgzk +Wex) . (17)
If W (k) g = I − D TD L , W (k) e = DT L and θ (k) = λL are fixed for all the K layers, the output of this neural net is exactly the vector zK resulting from K steps of ISTA. With LISTA, the parameters Θ are learned using back propagation to minimize the cost function:
f(Θ) = Ex [ Fx(Φ K Θ (x)) ] .
A similar algorithm can be derived from FISTA, the accelerated version of ISTA to obtain LFISTA (see Figure 5 in Appendix A ). The architecture is very similar to LISTA, now with two memory tapes:
zk+1 = hθ(Wgzk +Wmzk−1 +Wex) .
Factorization network Our analysis in Section 2 suggests a refactorization of LISTA in more a structured class of parameters. Following the same basic architecture, and using (5), the network FacNet, ΨKΘ is formed using layers such that:
zk+1 = ψ k Θ(zk) := A ThλS−1(Azk − S−1A(DTDzk −DTx)) , (18) with S diagonal and A unitary, the parameters of the k-th layer. The parameters obtained after training such a network with back-propagation can be used with the theory developed in Section 2. Up to the last linear operation AT of the network, this network is a re-parametrization of LISTA in a more constrained parameter space. Thus, LISTA is a generalization of this proposed network and should have performances at least as good as FacNet, for a fixed number of layers.
The optimization can also be performed using backpropagation. To enforce the unitary constraints on A(k), the cost function is modified with a penalty:
f(Θ) = Ex [ Fx(Ψ K Θ (x)) ] + µ
K K∑ k=1 ∥∥∥∥∥I− (A(k))T A(k) ∥∥∥∥∥ 2
2
, (19)
with Θ = (A(k), S(k))k=1...K the parameters of the K layers and µ a scaling factor for the regularization. The resulting matrix A(k) is then projected on the Stiefel Manifold using a SVD to obtain final parameters, coherent with the network structure.
Linear model Finally, it is important to distinguish the performance gain resulting from choosing a suitable starting point and the acceleration from our model. To highlights the gain obtain by changing the starting point, we considered a linear model with one layer such that zout = A
(0)x. This model is learned using SGD with the convex cost function f(A(0)) = ‖(I−DA(0))x‖22 + λ‖A(0)x‖1 . It computes a tradeoff between starting from the sparsest point 0 and a point with minimal reconstruction error y . Then, we observe the performance of the classical iteration of ISTA using zout as a stating point instead of 0 .
3.2 Synthetic problems with known distributions
Gaussian dictionary In order to disentangle the role of dictionary structure from the role of data distribution structure, the minimization problem is tested using a synthetic generative model with no structure in the weights distribution. First, m atoms di ∈ Rn are drawn iid from a multivariate Gaussian with mean 0 and covariance In and the dictionary
D is defined as ( di/‖di‖2 ) i=1...m . The data points are generated from its sparse codes following a Bernoulli-Gaussian model. The coefficients z = (z1, . . . , zm) are constructed with zi = biai, where bi ∼ B(ρ) and ai ∼ N (0, σIm) , where ρ controls the sparsity of the data. The values are set to m=100, n=64 for the dictionary dimension, ρ = 5/m for the sparsity level and σ=10 for the activation coefficient generation parameters. The sparsity
regularization is set to λ=0.01. The batches used for the training are generated with the model at each step and the cost function is evaluated over a fixed test set, not used in the training.
Figure 2 displays the cost performance for methods ISTA/FISTA/Linear relatively to their iterations and for methods LISTA/LFISTA/FacNet relatively to the number of layers used to solve our generated problem. Linear has performances comparable to learned methods with the first iteration but a gap appears as the number of layers increases, until a point where it achieves the same performances as non adaptive methods. This highlights that the adaptation is possible in the subsequent layers of the networks, going farther than choosing a suitable starting point for iterative methods. The first layers permit to achieve a large gain over the classical optimization strategy, by leveraging the structure of the problem. This appears even with no structure in the sparsity patterns of input data, in accordance with the results in the previous section. We also observe diminishing returns as the number of layers increases. This results from the phase transition described in Subsubsection 2.3.1, as the last layers behave as ISTA steps and do not speed up the convergence. The 3 learned algorithms are always performing at least as well as their classical counterpart, as it was stated in Theorem 2.2. We also explored the effect of the sparsity level in the training and learning of adaptive networks. In the denser setting, the arbitrage between the `1-norm and the squared error is easier as the solution has a lot of non zero coefficients. Thus in this setting, the approximate method is more precise than in the very sparse setting where the approximation must perform a fine selection of the coefficients. But it also yield lower gain at the beggining as the sparser solution can move faster.
There is a small gap between LISTA and FacNet in this setup. This can be explained from the extra constraints on the weights that we impose in the FacNet, which effectively reduce the parameter space by half. Also, we implement the unitary constraints on the matrix A by a soft regularization (see (19)), involving an extra hyper-parameter µ that also contributes to the small performance gap. In any case, these experiments show that our analysis accounts for most of the acceleration provided by LISTA, as the performance of both methods are similar, up to optimization errors.
Adversarial dictionary The results from Section 2 show that problems with a gram matrix composed of large eigenvalues associated to non sparse eigenvectors are harder to accelerate. Indeed, it is not possible in this case to find a quasi diagonalization of the matrix B that
does not distort the `1 norm. It is possible to generate such a dictionary using Harmonic Analysis. The Discrete Fourier Transform (DFT) distorts a lot the `1 ball, since a very sparse vector in the temporal space is transformed in widely spread spectrum in the Fourier domain. We can thus design a dictionary for which LISTA and FacNet performances should
be degraded. D = ( di/‖di‖2 ) i=1...m is constructed such that dj,k = e −2πijζk , with ( ζk ) k≤n
randomly selected from { 1/m, . . . ,m/2/m } without replacement.
The resulting performances are reported in Figure 3. The first layer provides a big gain by changing the starting point of the iterative methods. It realizes an arbitrage of the tradeoff between starting from 0 and starting from y . But the next layers do not yield any extra gain compared to the original ISTA algorithm. After 4 layers, the cost performance of both adaptive methods and ISTA are equivalent. It is clear that in this case, FacNet does not accelerate efficiently the sparse coding, in accordance with our result from Section 2. LISTA also displays poor performances in this setting. This provides further evidence that FacNet and LISTA share the same acceleration mechanism as adversarial dictionaries for FacNet are also adversarial for LISTA.
3.3 Sparse coding with over complete dictionary on images
Wavelet encoding for natural images A highly structured dictionary composed of translation invariant Haar wavelets is used to encode 8x8 patches of images from the PASCAL VOC 2008 dataset. The network is used to learn an efficient sparse coder for natural images over this family. 500 images are sampled from dataset to train the encoder. Training batches are obtained by uniformly sampling patches from the training image set to feed the stochastic optimization of the network. The encoder is then tested with 10000 patches sampled from 100 new images from the same dataset.
Learned dictionary for MNIST To evaluate the performance of LISTA for dictionary learning, LISTA was used to encode MNIST images over an unconstrained dictionary, learned a priori using classical dictionary learning techniques. The dictionary of 100 atoms was learned from 10000 MNIST images in grayscale rescaled to 17x17 using the implementation of Mairal et al. (2009) proposed in scikit-learn, with λ = 0.05. Then, the networks were trained through backpropagation using all the 60000 images from the training set of MNIST. Finally, the perfornance of these encoders were evaluated with the 10000 images of the training set of MNIST.
The Figure 4 displays the cost performance of the adaptive procedures compared to nonadaptive algorithms. In both scenario, FacNet has performances comparable to the one of LISTA and their behavior are in accordance with the theory developed in Section 2. The gains become smaller for each added layer and the initial gain is achieved for dictionary either structured or unstructured. The MNIST case presents a much larger gain compare to the experiment with natural images. This results from the difference of structure of the input distribution, as the MNIST digits are much more constrained than patches from natural images and the network is able to leverage it to find a better encoder. In the MNIST case, a network composed of 12 layers is sufficient to achieve performance comparable to ISTA with more than 1000 iterations.
4 Conclusions
In this paper we studied the problem of finite computational budget approximation of sparse coding. Inspired by the ability of neural networks to accelerate over splitting methods on the first few iterations, we have studied which properties of the dictionary matrix and the data distribution lead to such acceleration. Our analysis reveals that one can obtain acceleration by finding approximate matrix factorizations of the dictionary which nearly diagonalize its Gram matrix, but whose orthogonal transformations leave approximately invariant the `1 ball. By appropriately balancing these two conditions, we show that the resulting rotated proximal splitting scheme has an upper bound which improves over the ISTA upper bound under appropriate sparsity.
In order to relate this specific factorization property to the actual LISTA algorithm, we have introduced a reparametrization of the neural network that specifically computes the factorization, and incidentally provides reduced learning complexity (less parameters) from the original LISTA. Numerical experiments of Section 3 show that such reparametrization recovers the same gains as the original neural network, providing evidence that our theoretical analysis is partially explaining the behavior of the LISTA neural network. Our acceleration scheme is inherently transient, in the sense that once the iterates are sufficiently close to the optimum, the factorization is not effective anymore. This transient effect is also consistent with the performance observed numerically, although the possibility remains open to find alternative models that further exploit the particular structure of the sparse coding. Finally, we provide evidence that successful matrix factorization is not only sufficient but also necessary for acceleration, by showing that Fourier dictionaries are not accelerated.
Despite these initial results, a lot remains to be understood on the general question of optimal tradeoffs between computational budget and statistical accuracy. Our analysis so far did not take into account any probabilistic consideration (e.g. obtain approximations that hold with high probability or in expectation). Another area of further study is the extension of our analysis to the FISTA case, and more generally to other inference tasks that are currently solved via iterative procedures compatible with neural network parametrizations, such as inference in Graphical Models using Belief Propagation or other ill-posed inverse problems.
A Learned Fista
A similar algorithm, LFISTA, can be derived from FISTA, the accelerated version of ISTA (see Figure 5). FISTA introduces a momentum term to improve the convergence rate of ISTA as follows:

1. $y_k = z_k + \frac{t_{k-1} - 1}{t_k}\,(z_k - z_{k-1})$,

2. $z_{k+1} = h_{\lambda/L}\!\left(y_k - \frac{1}{L}\nabla E(y_k)\right) = h_{\lambda/L}\!\left(\left(I - \frac{1}{L}B\right)y_k + \frac{1}{L}D^{\mathsf T}x\right)$,

3. $t_{k+1} = \frac{1 + \sqrt{1 + 4t_k^2}}{2}$.
By substituting the expression for $y_k$ into the update of $z_{k+1}$, we obtain a generic recurrent architecture very similar to LISTA, now with two memory taps, that we denote by LFISTA:
$$z_{k+1} = h_\theta\!\left(W_g^{(k)} z_k + W_m^{(k)} z_{k-1} + W_e^{(k)} x\right).$$
This model is equivalent to running K-steps of FISTA when its parameters are initialized with
$$W_g^{(k)} = \left(1 + \frac{t_{k-1} - 1}{t_k}\right)\left(I - \frac{1}{L}B\right),$$
$$W_m^{(k)} = \left(\frac{1 - t_{k-1}}{t_k}\right)\left(I - \frac{1}{L}B\right),$$
$$W_e^{(k)} = \frac{1}{L}D^{\mathsf T}.$$
The parameters of this new architecture, presented in Figure 5, are trained analogously to the LISTA case.
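To make the LFISTA recurrence concrete, the following is a minimal NumPy sketch that builds the taps $W_g^{(k)}$, $W_m^{(k)}$, $W_e^{(k)}$ from a dictionary $D$ and runs $K$ steps. The function names and index bookkeeping are ours, and the standard FISTA convention $t_0 = 1$, $z_{-1} = z_0 = 0$ is assumed.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1, i.e. the h nonlinearity.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lfista_forward(D, x, lam, K):
    # D: (n, m) dictionary, x: (n,) signal; returns the K-th iterate z_K.
    B = D.T @ D
    L = np.linalg.norm(B, 2)           # Lipschitz constant of the gradient of E
    A = np.eye(B.shape[0]) - B / L     # I - (1/L) B, shared by both taps
    We = D.T / L                       # W_e^{(k)} = (1/L) D^T
    z_prev = np.zeros(B.shape[0])
    z = np.zeros(B.shape[0])
    t_prev, t = 1.0, 1.0
    for _ in range(K):
        Wg = (1.0 + (t_prev - 1.0) / t) * A   # coefficient of z_k
        Wm = ((1.0 - t_prev) / t) * A         # coefficient of z_{k-1}
        z, z_prev = soft_threshold(Wg @ z + Wm @ z_prev + We @ x, lam / L), z
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return z

Replacing these fixed matrices with learned per-layer parameters and training them by backpropagation yields the LFISTA network.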
B Proofs
Lemma B.1. Suppose that $R = A^{\mathsf T}SA - B$ is positive definite, and define
$$z_{k+1} = \arg\min_z \tilde F(z, z_k), \qquad (20)$$
and $\delta_A(z) = \|Az\|_1 - \|z\|_1$. Then we have
$$F(z_{k+1}) - F(z^*) \le \frac{1}{2}\left((z^* - z_k)^{\mathsf T}R(z^* - z_k) - (z^* - z_{k+1})^{\mathsf T}R(z^* - z_{k+1})\right) + \langle \partial\delta_A(z_{k+1}),\, z_{k+1} - z^*\rangle. \qquad (21)$$
Proof. We define
$$f(t) = F\!\left(t z_{k+1} + (1-t)z^*\right), \quad t \in [0, 1].$$
Since $F$ is convex, $f$ is also convex on $[0, 1]$. Since $f(0) = F(z^*)$ is the global minimum, it results that $f'(t)$ is increasing on $(0, 1]$, and hence
$$F(z_{k+1}) - F(z^*) = f(1) - f(0) = \int_0^1 f'(t)\,dt \le f'(1),$$
where $f'(1)$ is any element of $\partial f(1)$. Since $\delta_A(z)$ is a difference of convex functions, its subgradient can be defined as a limit of infimal convolutions (Hiriart-Urruty, 1991). We have $\partial f(1) = \langle \partial F(z_{k+1}),\, z_{k+1} - z^*\rangle$, and since $\partial F(z) = \partial\tilde F(z, z_k) - R(z - z_k) - \partial\delta_A(z)$ and $0 \in \partial\tilde F(z_{k+1}, z_k)$, it results that $\partial F(z_{k+1}) = -R(z_{k+1} - z_k) - \partial\delta_A(z_{k+1})$, and thus
$$F(z_{k+1}) - F(z^*) \le (z^* - z_{k+1})^{\mathsf T}R(z_{k+1} - z_k) + \langle \partial\delta_A(z_{k+1}),\, z^* - z_{k+1}\rangle. \qquad (22)$$
(21) is obtained by observing that
$$(z^* - z_{k+1})^{\mathsf T}R(z_{k+1} - z_k) \le \frac{1}{2}\left((z^* - z_k)^{\mathsf T}R(z^* - z_k) - (z^* - z_{k+1})^{\mathsf T}R(z^* - z_{k+1})\right), \qquad (23)$$
thanks to the fact that $R \succ 0$.
Theorem B.2. Let $A_k, S_k$ be the pair of unitary and diagonal matrices corresponding to iteration $k$, chosen such that $R_k = A_k^{\mathsf T}S_kA_k - B \succ 0$. It results that
$$F(z_k) - F(z^*) \le \frac{(z^* - z_0)^{\mathsf T}R_0(z^* - z_0) + 2\langle \nabla\delta_{A_0}(z_1),\, z^* - z_1\rangle}{2k} + \frac{\alpha - \beta}{2k}, \quad \text{with} \qquad (24)$$
$$\alpha = \sum_{n=1}^{k-1}\left(2\langle \nabla\delta_{A_n}(z_{n+1}),\, z^* - z_{n+1}\rangle + (z^* - z_n)^{\mathsf T}(R_{n-1} - R_n)(z^* - z_n)\right),$$
$$\beta = \sum_{n=0}^{k-1}(n+1)\left((z_{n+1} - z_n)^{\mathsf T}R_n(z_{n+1} - z_n) + 2\delta_{A_n}(z_{n+1}) - 2\delta_{A_n}(z_n)\right).$$
Proof: The proof is adapted from Beck & Teboulle (2009), Theorem 3.1. From Lemma B.1, we start by using (21) to bound terms of the form $F(z_n) - F(z^*)$:
$$F(z_n) - F(z^*) \le \langle \nabla\delta_{A_n}(z_{n+1}),\, z^* - z_{n+1}\rangle + \frac{1}{2}\left((z^* - z_n)^{\mathsf T}R_n(z^* - z_n) - (z^* - z_{n+1})^{\mathsf T}R_n(z^* - z_{n+1})\right).$$
Adding these inequalities for $n = 0, \dots, k-1$, we obtain
$$\sum_{n=0}^{k-1} F(z_n) - kF(z^*) \le \sum_{n=0}^{k-1}\langle \nabla\delta_{A_n}(z_{n+1}),\, z^* - z_{n+1}\rangle + \frac{1}{2}\left((z^* - z_0)^{\mathsf T}R_0(z^* - z_0) - (z^* - z_k)^{\mathsf T}R_{k-1}(z^* - z_k)\right) + \frac{1}{2}\sum_{n=1}^{k-1}(z^* - z_n)^{\mathsf T}(R_{n-1} - R_n)(z^* - z_n). \qquad (25)$$
On the other hand, we also have
$$F(z_n) - F(z_{n+1}) \ge F(z_n) - \tilde F(z_n, z_n) + \tilde F(z_{n+1}, z_n) - F(z_{n+1}) = -\delta_{A_n}(z_n) + \delta_{A_n}(z_{n+1}) + \frac{1}{2}(z_{n+1} - z_n)^{\mathsf T}R_n(z_{n+1} - z_n),$$
which results in
$$\sum_{n=0}^{k-1}(n+1)\bigl(F(z_n) - F(z_{n+1})\bigr) \ge \frac{1}{2}\sum_{n=0}^{k-1}(n+1)(z_{n+1} - z_n)^{\mathsf T}R_n(z_{n+1} - z_n) + \sum_{n=0}^{k-1}(n+1)\bigl(\delta_{A_n}(z_{n+1}) - \delta_{A_n}(z_n)\bigr), \qquad (26)$$
that is,
$$\sum_{n=0}^{k-1} F(z_n) - kF(z_k) \ge \sum_{n=0}^{k-1}(n+1)\left(\frac{1}{2}(z_{n+1} - z_n)^{\mathsf T}R_n(z_{n+1} - z_n) + \delta_{A_n}(z_{n+1}) - \delta_{A_n}(z_n)\right).$$
Combining (25) and (26), we obtain
$$F(z_k) - F(z^*) \le \frac{(z^* - z_0)^{\mathsf T}R_0(z^* - z_0) + 2\langle \nabla\delta_{A_0}(z_1),\, z^* - z_1\rangle}{2k} + \frac{\alpha - \beta}{2k} \qquad (27)$$
with
$$\alpha = \sum_{n=1}^{k-1}\left(2\langle \nabla\delta_{A_n}(z_{n+1}),\, z^* - z_{n+1}\rangle + (z^* - z_n)^{\mathsf T}(R_{n-1} - R_n)(z^* - z_n)\right),$$
$$\beta = \sum_{n=0}^{k-1}(n+1)\left((z_{n+1} - z_n)^{\mathsf T}R_n(z_{n+1} - z_n) + 2\delta_{A_n}(z_{n+1}) - 2\delta_{A_n}(z_n)\right).$$
Corollary B.3. If $A_k = I$ and $S_k = \|B\|I$ for $k > 0$, then
$$F(z_k) - F(z^*) \le \frac{(z^* - z_0)^{\mathsf T}R_0(z^* - z_0) + 2L_{A_0}(z_1)\bigl(\|z^* - z_1\| + \|z_1 - z_0\|\bigr) + (z^* - z_1)^{\mathsf T}R_0(z^* - z_1)}{2k}. \qquad (28)$$
Proof: We verify that in this case, $R_{n-1} - R_n \equiv 0$ for $n > 1$ and $\delta_{A_n} \equiv 0$ for $n > 0$. | 1. What is the focus of the paper regarding neural sparse coding?
2. What are the strengths and weaknesses of the proposed method compared to LISTA?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. What are the minor comments and typos mentioned by the reviewer? | Review | Review
This paper proposes a method for neural sparse coding inspired by LISTA (Gregor and LeCun 2010). A theoretical analysis is presented that attempts to explain the non-asymptotic acceleration property of LISTA (via Theorem 2.2. and Corollary 2.3).
FacNet is a specialization of LISTA, sharing the same network architecture but with additional constraints on the parameters. In numerical experiments, LISTA outperforms FacNet, up to some optimization errors. It is not clear what is the advantage of using FacNet instead of LISTA.
Overall, the paper lacks clarity in several parts. It would be good to state beforehand what the main contribution is. As stated in the clarification question/answer below, this paper would benefit from a more clear explanation about the connection of FacNet with LISTA.
Minor comments/typos:
- p. 6: "memory taps" -> tapes?
- sec 3.2: "a gap appears has the number of iterations increases" -> as?
- sec. 4: "numerical experiments of 3" -> of sec 3 |
ICLR | Title
Starfire: Regularization-Free Adversarially-Robust Structured Sparse Training
Abstract
This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces overhead caused by regularization. The proposed training methodology explores several options for structured sparsity. We study various tradeoffs with respect to pruning duration, learning-rate configuration, and the total length of training. We show that our method creates a sparse version of ResNet50 and ResNet50v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To ensure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable.
1 INTRODUCTION
Pruning weights can compress a network into a smaller model so that the model can fit into faster/smaller memory, resulting in execution speedups (Han et al., 2016; 2015a). To recover accuracy, Han et al. (2015b) and Mao et al. (2017) explore retraining the network densely after pruning. The resulting network can maintain accuracy based on the specified level of sparsity (Mostafa & Wang, 2019; Zhu & Gupta, 2017; Han et al., 2015a).
Structured sparsity has been explored for RNNs and also CNNs where a certain number of nonzeros is allowed across various cross-sections of the weight tensors. These methods aim to speed up computation and reach some final level of sparsity for deployment. Narang et al. (2017) have shown promising results for structured training of RNNs while sparse CNNs could not achieve the same performance (Mao et al., 2017).
Recent work has demonstrated that structurally sparse training can speed up execution on GPUs (He et al., 2017; Lym et al., 2019; Zhu & Gupta, 2017). However, these training mechanisms add regularization and computational overhead to eliminate unnecessary weights. The regularization term modifies the original training and can be expensive in hardware. While enforcing coarse-grain sparsity as in Lym et al. (2019) provides significant speedups, the final network contains an insufficient degree of sparsity for deployment on edge devices.
Mostafa & Wang (2019) show that with adaptive sparse training and dynamic reallocation of nonzeros, sparsity levels up to 80% can be achieved. However, even though an additional 10 epochs of training are required, an accuracy loss of around 1-2% is still observed. The main drawback is the overhead incurred while implementing such a technique on the target platform. Continuous reconfiguration of the sparsity pattern is expensive, as it does not allow for compression of weights during training.
To achieve speedups and a desired final degree of sparsity, we aim to apply the techniques in Han et al. (2015b) and Mao et al. (2017) at earlier stages in training at higher frequency within a period which we call the pruning era, usually a period of 20-30 epochs. During the pruning era, with fine granularity of at most a kernel size, we exploit one of the three proposed sparsity regimes. Subsequently, we fix the mask for the rest of the training to speed it up. Our motivation came from the insight that having a fixed sparse multiply-accumulate pattern allows weight compression during training and can save compute and energy in hardware (Han et al., 2016).
We explore the impact of various pruning granularities, sparsity levels, and learning-rate schedules on the network’s convergence as well as adversarial robustness for CNNs like Resnet-50 (He et al., 2015) on ImageNet and tinyImagenet (CS231N, 2015).
Recent literature has shown that adversarial attacks are more successful on pruned neural networks than they are on regular neural networks (Wang et al., 2018). Given the danger of adversarial attacks in real world situations, we find that it is important to evaluate our sparsity techniques under adversarial robustness. We leverage the FGSM mechanism (Goodfellow et al., 2014) to evaluate the adversarial robustness on our sparse models. This paper makes the following contributions:
1. We propose a mechanism to train and prune a convolutional network during the earlier stages of training such that this sparsity can be harvested for the computational speedups. To do this, we fix the sparse weight masks for the remainder of the training.
2. For fully connected sparsification, we eliminate blocks of fully connected weights based on their connection to the zeros in the previous convolutional layer.
3. We enforce structural, regularization-free, magnitude-based pruning across two distinct dimensions and a combined version. These dimensions are inside the convolution window (R×S) and across the input/output feature matrix (CK).
4. Our sparse models are as robust to adversarial FGSM attacks as fully dense models.
5. We demonstrate that early stage dense training is crucial for maintaining high accuracy.
6. The proposed technique is tolerant to sparsity levels of up to 60-70% with under 1% accuracy degradation. We can compensate by scheduling an extra learning rate drop and training for an extra 10 epochs.
The rest of the paper is organized as follows. Section 2 explains our pruning methodology. Section 3 describes the experimental setup framework. Section 4 presents results and discusses their interpretation. Section 5 presents the related work. Section 6 concludes the paper.
2 PRUNING METHODOLOGY
Our proposed pruning mechanism works by always pruning the weights of smallest magnitude after each weight update. After a forward and backward pass (one batch update), the model is pruned. If a weight is already zero, the gradient is also set to zero. This means that once a weight becomes zero, it will remain zero for the rest of the training period.
This mechanism is similar to Han et al. (2015b), except that we only prune in the earlier stages of the training as opposed to post training. Additionally, this work is similar to Narang et al. (2017) although we set the sparsity threshold instead of using a heuristic to calculate it. We chose this pruning mechanism because of its negligible computational overhead.
In our pruning algorithm, the sparsity threshold refers to the percentage of weights in the network that are currently pruned. Before or during the first epoch of pruning, we will have a sparsity threshold of zero. As we continue training, we gradually increase the sparsity threshold so that by the final epoch of pruning the network sparsity will have reached our final, desired threshold. Finally, we also define the pruning era to be the epochs between the first and final epochs of pruning depicted in Figure 1b.
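The paper leaves the exact shape of this ramp implicit; as a concrete illustration, a linear schedule over the pruning era could look like the following sketch (the linear ramp and the function name are our assumptions):

def sparsity_at(epoch, first_epoch, final_epoch, final_sparsity):
    # Fraction of weights pruned at a given epoch: zero before the pruning
    # era, a linear ramp inside it, and the target sparsity afterwards.
    if epoch <= first_epoch:
        return 0.0
    frac = min(1.0, (epoch - first_epoch) / (final_epoch - first_epoch))
    return final_sparsity * frac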
We evaluate the pruning mask after every training step until we reach the final epoch of pruning. After the final epoch, the pruned values in the network will remain zero for the rest of training; no new pruning will occur, and only the non-zero weights will be updated.
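A minimal PyTorch sketch of one pruning step is given below; the helper name is ours, and magnitude ties at the cutoff are resolved conservatively by kthvalue.

import torch

@torch.no_grad()
def update_mask(weight, mask, sparsity):
    # Zero out the `sparsity` fraction of smallest-magnitude weights;
    # already-masked weights have |w| = 0, so they stay masked.
    k = int(sparsity * weight.numel())
    if k > 0:
        cutoff = weight.abs().flatten().kthvalue(k).values
        mask &= weight.abs() > cutoff
    weight.mul_(mask)
    return mask

# After every optimizer step during the pruning era:
#   mask = update_mask(layer.weight, mask, current_sparsity)
# and before the next step, gradients of pruned weights are zeroed:
#   layer.weight.grad.mul_(mask)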
2.1 PRUNING METHODOLOGY BY LAYER
Pruning the smallest magnitude weights in the entire network is inefficient because it involves sorting the weights over the network. Instead, we prune the smallest magnitude weights or sum of weights, within a certain locale of the network. When pruning, we examine each layer individually and apply a separate technique to evaluate which weights to prune, depending on the type of layer we are currently pruning.
2.1.1 CONVOLUTIONAL LAYER PRUNING
Window pruning for 3x3 Convolutional Layers Figure 2a shows the result of a pruned 3×3 convolutional weight tensor under the window pruning method. In this scheme, window layer pruning refers to pruning of weights within the 3×3 convolution kernels. We allow a maximum number of non-zero values for each kernel in the 3×3 convolutional layers and eliminate the weights of smallest magnitude.
Algorithm 1 CK Pruning Algorithm

generate_ck_sparsity_mask(θ_layer, sparsity_threshold):
for θ in θ_layer do
    for all c in C do
        for all k in K do
            kernel_max_{c,k} = max(θ_{c,k})
        end for
        cutoff_index = size(θ_c) * sparsity_threshold
        n = max(cutoff_index, size(θ_c) − max_non_zero − 1)
        cutoff_value = nth largest value in kernel_max_c
        for all k in K do
            mask_{c,k} = 1 if kernel_max_{c,k} > cutoff_value, else 0
        end for
    end for
end for
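Below is a rough, runnable translation of Algorithm 1 in PyTorch. We assume a (C, K, R, S) weight layout to mirror the paper's notation (PyTorch convolutions store (out, in, R, S)), and we omit the max_non_zero clamp for brevity.

import torch

def ck_sparsity_mask(weight, sparsity):
    # weight: (C, K, R, S). For each c, rank the K kernels by max |w|
    # and prune whole R x S kernels below the per-c cutoff.
    C, K, R, S = weight.shape
    kernel_max = weight.abs().amax(dim=(2, 3))            # (C, K)
    n = int(sparsity * K)
    mask = torch.ones_like(kernel_max, dtype=torch.bool)
    if n > 0:
        cutoff = kernel_max.kthvalue(n, dim=1, keepdim=True).values
        mask = kernel_max > cutoff
    return mask[:, :, None, None].expand(C, K, R, S)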
CK Pruning Methodology Figure 2b shows the result of a pruned 3×3 convolutional weight tensor under the CK pruning method. In this scheme, the weights of a certain layer can be viewed
as a CK matrix of R×S kernels. The CK pruning method involves pruning the 3×3 convolutions along the channel and kernel dimensions of each convolutional filter, i.e., we prune whole kernels (CK matrix of R×S windows) at once and can ultimately prune all the input channels in an output channel. As defined by Algorithm 1, we determine which filter to prune by examining the max of the magnitudes of all the weights in a kernel, which is the max of nine weights. This max is used to evaluate whether the whole kernel should be pruned or not.
Combined Pruning Methodology To combine window and CK pruning, we introduce an intra-epoch combined pruning method, which we refer to hereafter as “intra-epoch pruning” or “intra”, for short. As shown by Algorithm 4 in the Appendix, in a given epoch we first apply window pruning to each 3×3 convolutional layer at a fraction of the sparsity threshold for that epoch. Then, we prune the remaining fraction of the sparsity threshold with CK pruning.
2.1.2 FULLY CONNECTED PRUNING
Like pruning for convolutional layers, we apply a two-tier pruning scheme from Mao et al. (2017) for fully connected layers: micro-level pruning within a block and macro-level pruning that eliminates entire blocks.
Block FC Pruning Figure 2d refers to pruning of individual blocks. Here, we prune entire n×n (n<5) windows within the dense layer and create coarse-grained sparsity. To do this, we sum the magnitude of the weights in each window and prune the windows with the smallest magnitude.
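A sketch of this block-level criterion follows; the function name, the default block size, and the assumption that both matrix dimensions are divisible by n are ours.

import torch

def block_fc_mask(weight, sparsity, n=4):
    # weight: (out, in) FC matrix with both dims divisible by n (n < 5).
    # Score each n x n block by its summed |w| and prune the smallest blocks.
    ob, ib = weight.shape[0] // n, weight.shape[1] // n
    scores = weight.abs().reshape(ob, n, ib, n).sum(dim=(1, 3))   # (ob, ib)
    k = int(sparsity * scores.numel())
    mask = torch.ones_like(scores, dtype=torch.bool)
    if k > 0:
        cutoff = scores.flatten().kthvalue(k).values
        mask = scores > cutoff
    return mask.repeat_interleave(n, dim=0).repeat_interleave(n, dim=1)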
Fine FC Pruning Figure 2c refers to the pruning of individual weights. Here, we prune the individual weights in the entire FC Layer, where we compare the magnitude of all the weights to each other.
The produced zero patterns in the last convolution layer allow for eliminating more weights in the fully connected layer, as depicted in Figure 3. If all the C windows for a specific Ki are zeros, the output activation for the corresponding Ki is also zero. The corresponding neurons in the following fully connected layer therefore receive zero input activations and can be eliminated along with their associated weights. This enables us to obtain sparsity without having to evaluate the weights in the fully connected layer.
When pruning just the small weights in the FC layer, one can inadvertently cut off relevant connections between the input and output layers. Accordingly, we structure the pruning mechanism such that each output neuron should be influenced by the input. This means every column in the weight matrix of the fully connected layer in Figure 3 has at least one non-zero element.
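A rough sketch of this channel-to-neuron elimination (the tensor layout and the per-channel flattening assumption are ours):

import torch

def fc_cols_from_conv_mask(conv_mask, fc_in_features):
    # conv_mask: (C, K, R, S) mask of the last conv layer. If every kernel
    # feeding output channel k is zero, that channel's activations are zero,
    # so the FC columns reading from it can be dropped as well.
    K = conv_mask.shape[1]
    channel_alive = conv_mask.sum(dim=(0, 2, 3)) > 0   # (K,) any nonzero kernel
    cols_per_channel = fc_in_features // K             # spatial positions per channel
    return channel_alive.repeat_interleave(cols_per_channel)

The surviving columns can then be pruned further, subject to the at-least-one-nonzero-per-column constraint described above.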
3 EXPERIMENTAL SETUP
To validate each type of pruning (window, CK, or intra-epoch) we selected ResNet50 (He et al., 2015) v1 and v1.5 with the ImageNet and/or Tiny-ImageNet (CS231N, 2015) datasets. We evaluated each pruning method by varying sparsity levels and pruning era.
We experimented with ResNet50 v1.5, in addition to v1, to explore how changing the network structure would affect the top-1 accuracy. For window pruning, we tested with ResNet50v1 on Tiny-ImageNet as well as ResNet50v1 and v1.5 on ImageNet to compare the impact of strided convolutions on our sparse training. Also, we experimented with the learning rate schedule of the training regime. Our typical schedule for ResNet50v1.5 included learning rate drops at epochs 30,
60, and 90, but we experimented with placing the last drop at epoch 80 instead. Unlike typical ResNet50 training, which uses a batch size of 256 and starts the learning rate at 0.1, we used batch size 64 as this is what could fit in our GPUs. As suggested by Krizhevsky (2014), we scaled the starting learning rate by $\frac{1}{\sqrt{4}} = \frac{1}{2}$ to 0.05 in order to compensate for the smaller batch size.
3.1 SPARSE TRAINING EXPERIMENTS
For ResNet50v1 and Tiny-ImageNet, we did gradual pruning until epoch 10. We subsequently enforced the final sparsity requirement, set a maximum number of non-zero values in each window/kernel of each layer, and fixed this sparsity pattern for the rest of training. We chose the 10th epoch as the final epoch of pruning because we wanted to see if we could fix the sparsity mask early in the training process.
For ResNet50v1 and ImageNet, our goal was to start pruning as early as possible while maintaining high accuracy. We set our pruning era to epochs 0-30. Our hypothesis was that the 30th epoch would be a suitable epoch to stop pruning because this is where the learning rate is first decreased; in addition, there would be a large drop in accuracy if we stopped pruning at epoch 20. However, this schedule did not perform well for ResNet50v1.5 and ImageNet, and therefore we set our pruning era to epochs 30-50.
To test training using CK and intra-epoch pruning, we mainly used ResNet50v1 and ResNet50v1.5 with ImageNet, but also performed CK pruning on ResNet50v1 and Tiny-ImageNet. We adopted a similar approach to Han et al. (2015b) to train with CK or intra-epoch pruning by setting the first epoch of pruning to 20, 30, or 40 with a pruning era of 20 or 30 epochs. Then, we continued to train the sparsified network until the final epoch.
3.2 ADVERSARIAL ROBUSTNESS
Since there was evidence that increasing sparsity lowers adversarial robustness (Wang et al., 2018), we evaluated this robustness in our models. To do so, we applied Fast Gradient Sign Method (FGSM) attacks, defined in Goodfellow et al. (2014), on one of our sparse models to generate its own adversarial examples, and measured the validation accuracy again. We used the same validation set as ImageNet and applied the attack’s image transformation to each input image. Moreover, we experimented with a variety of different $\epsilon$ values in order to see how our accuracy decayed. Lastly, in our experiments we leveraged the examples provided in the PyTorch tutorials 1.
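Our evaluation follows the standard FGSM recipe, $x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x \mathcal{L}(x, y))$; a minimal PyTorch sketch (function name ours) is:

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps):
    # One-step attack of Goodfellow et al. (2014): perturb each pixel by
    # eps in the direction that increases the loss.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()   # keep pixels in the valid range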
4 RESULTS
4.1 RESNET50 ON TINY-IMAGENET
From our experiments with Tiny-Imagenet (shown in Appendix in Table 5), we see that even with up to 80% sparsity, both window and CK pruning are able to achieve levels of accuracy comparable to the dense baseline. CK pruning performs even better than the baseline.
4.2 RESNET50 ON IMAGENET
Our ResNet50 v1.5 experiments (Table 1 and Appendix Figure 11) with the first epoch of pruning at epoch 30 show that all of our pruning methods are able to achieve over 73% accuracy, and we can achieve above 74% accuracy at up to 70% sparsity. Table 2 shows that on ResNet50 v1, our methods achieve accuracy between 0.1% and 0.3% below the baseline.
By comparing the sparsity curves of the window, CK, and intra-epoch pruning runs in Figure 4 (top right), we observe that the sparsity of window pruning is not as smooth as the other methods. This is likely indicative of the more rigid structure of CK and intra-epoch pruning, which causes the degree of sparsity to be much more uniform from epoch to epoch.
Figure 4 (top left, bottom right) also shows that on ResNet50v1.5, window pruning is slightly better than CK and intra-epoch pruning, which have similar performance, but window pruning is worse than the other two on ResNet50v1. Furthermore, starting the pruning era later improves performance (Figure 4, bottom left).

1https://pytorch.org/tutorials/beginner/fgsm_tutorial.html
Table 3 demonstrates that our sparsity mechanism can have a minimal drop in adversarial robustness (approximately 1-1.5%) compared to the dense baseline model, whereas other methods see more accuracy degradation (Wang et al., 2018).
The sparsity of each layer, depicted in Figure 5, emphasizes that early layers tolerate sparsity better, as they have consistently higher sparsity in the last 1×1 convolutional layer of each residual block. This may be due to their vicinity to the residual connection, which provides additional information to the layer.
4.3 DISCUSSION
Overall, we notice that there is a tolerance for sparsity (up to 70%), which yields around 1% accuracy loss compared to the dense baseline. However, this loss can be compensated by dropping the learning rate and performing another 10 epochs of training, which provides a 0.7-0.9% accuracy increase. With high levels of sparsity this extension is computationally cheap.
We observed that the early stages of dense training are important for high accuracy, as longer periods of dense training consistently outperformed shorter ones. Moreover, widening the pruning era slightly (10 epochs) improves the final convergence accuracy (by around 0.2%).
We also observed that pushing the learning rate drop schedule to earlier epochs or aligning it with pruning era does not improve the final accuracy. However, pushing the last learning rate drop from epoch 90 to 80 can improve the accuracy by around 0.1%. (See Appendix Table 8 and Table 1)
We postulate that window pruning performs worse for ResNetv1.5 compared to ResNetv1 due to the strided nature of convolutions in ResNetv1.5.
5 RELATED WORK
To enable a broad comparison, we extended Mostafa & Wang (2019)’s table on alternative sparsity mechanisms in Table 4 with respect to characteristics of their mechanisms: training/compression focus, regularization, the period in which pruning is applied, strictness of parameter budget, and pruning granularity. We explain each of the columns below:
1. Training Focus: Trying to train while maintaining/increasing sparsity of the network. The opposite is Compression Focus, i.e., methods that only seek to provide a smaller network for inference.
2. Regularization: Applying a regularization value to the loss, in order to find and prune irrelevant weights, while others use magnitude-based pruning.
3. Pruning Era: The period during training in which the pruning is applied.
4. Strictness of Parameter Budget with respect to Pruning: A strict parameter budget is fixed to the size of the final sparse model. Mostafa & Wang (2019) have a strict budget throughout training. Our method is only strict after the pruning era. Some networks do not have a strict parameter budget and only prune weights that appear to be irrelevant, without a sparsity target.
5. Pruning Granularity: The level of granularity within the network at which values are pruned. For example, at the kernel level, we determine which values to prune by examining only the weights in the kernel (Mao et al., 2017). See Figure 2 for more information.
We chose these concepts because their characteristics can enable faster and lower-energy training. A strict parameter budget allows the hardware mapping to plan for a fixed number of multiply-accumulate operations. The granularity of the sparsity mechanism indicates how easy it is to adapt the mechanism to existing hardware; the coarser the granularity, the more adaptable it is (Mao et al., 2017). Regularization, although useful in forcing the network to learn prunable weights, adds more irregularity to the computation flow. Starting the pruning era at the beginning of training enables us to train with a compressed network.
Mao et al. (2017) explore pruning at a range of granularities, including window, kernel, and filter, and their effect on accuracy. They also qualitatively and quantitatively show that coarse-grain pruning, like kernel- or filter-level sparsity, is more energy-efficient due to fewer memory references. Similarly, our work surveys sparsity at the window, kernel, and filter levels. We improve on Mao et al.’s work in two ways. First, we show higher top-5 accuracy at higher sparsity levels on a complex benchmark, ImageNet on ResNet50 (92.338% at 40% CK sparsity), and we also show high top-1 accuracy, whereas Mao et al. only report top-5.
Prunetrain (Lym et al., 2019) explores a way to create sparse channels and even layers to speed up training with around a 1% drop in accuracy. However, this requires a shift in the training mechanism, including a regularization term that could affect how the mechanism scales to large and distributed settings and that must be computed throughout training. The resulting network is only around 50% sparse, and the accuracy loss due to sparse training is high enough that a baseline network with the same accuracy could achieve the same computational savings by simply terminating training at a much earlier stage/epoch.
In contrast to other pruning mechanisms, our proposed window, CK, and combined sparsity mechanisms have strict parameter budgets after the pruning era. The CK and combined schemes have channel-level and kernel-level pruning granularities.
6 CONCLUSION AND FUTURE WORK
In this work, we introduced techniques to train CNNs with structured sparsity and studied the tradeoffs associated with various implementation options. We demonstrated on ResNet50 with the full ImageNet dataset that the proposed sparse training method outperforms all related work and is comparable to a dense model in terms of convergence accuracy. We also observed that delaying the start of enforced, gradual pruning to at least epoch 20 was necessary to reach high convergence accuracy, highlighting the importance of the early epochs of dense training. Moreover, performing an additional 10 epochs of training provides substantial (around 1%) accuracy gains of the final model. In the future, we would like to study the tradeoffs of sparse training on low-precision networks.
7 APPENDIX
7.1 DETAILS OF PRUNING ALGORITHMS
Here we provide full descriptions of our other pruning methods and our general methodology sparse training.
Sparse Training Methodology Algorithm 2 shows how we modify normal training in order to train sparsely.
Algorithm 2 Pruning Algorithm

current_iter = 0
while training do
    if current_iter > first epoch of pruning and current_iter < last epoch of pruning then
        mask = generate_sparsity_mask(θ, current_iter, sparsity_threshold)
    end if
    θ_pruned = mask ∩ θ
    ŷ = forward_pass(θ_pruned, x)
    θ = weight_update(y, ŷ, θ_pruned)
    current_iter = current_iter + 1
end while
Window Pruning Methodology Algorithm 3 shows how we prune with window sparsity.
Algorithm 3 Window Pruning Algorithm

generate_window_sparsity_mask(θ_layer, sparsity_threshold):
for θ in θ_layer do
    for all c in C do
        for all k in K do
            cutoff_index = size(θ_{c,k}) * sparsity_threshold
            n = max(cutoff_index, size(θ_{c,k}) − max_non_zero − 1)
            cutoff_value = nth largest value in θ_{c,k}
            for all i, j in R, S do
                mask_{i,j,c,k} = 1 if θ_{i,j,c,k} > cutoff_value, else 0
            end for
        end for
    end for
end for
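A rough PyTorch translation of Algorithm 3, under the same (C, K, R, S) layout assumption as before and again omitting the max_non_zero clamp:

import torch

def window_sparsity_mask(weight, sparsity):
    # weight: (C, K, R, S). Within every R x S kernel, zero the smallest
    # |w| entries so that roughly `sparsity` of each kernel is pruned.
    C, K, R, S = weight.shape
    flat = weight.abs().reshape(C, K, R * S)
    n = int(sparsity * R * S)
    mask = torch.ones_like(flat, dtype=torch.bool)
    if n > 0:
        cutoff = flat.kthvalue(n, dim=2, keepdim=True).values
        mask = flat > cutoff
    return mask.reshape(C, K, R, S)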
Combined Pruning Methodology To combine Window and CK pruning, we introduce intra-epoch pruning. As shown by Algorithm 4, in a given epoch we first apply Window Pruning to each 3×3 convolutional layer at a fraction of the sparsity threshold for that epoch. Then, we prune the remaining fraction of the sparsity threshold with CK Pruning. The idea is that kernels that lose many of their parameters during window pruning can be fully pruned during the CK pruning phase. Our intuition is that first pruning parameters within kernels guides the subsequent CK pruning towards the less important kernels; thus, we pick out better kernels to prune. We also gain more structured sparsity but sacrifice the precision of window pruning.
Algorithm 4 Intra-Epoch Pruning Algorithm

generate_intra_epoch_sparsity_mask(θ_layer, sparsity_threshold):
for θ in θ_layer do
    window_mask = generate_window_sparsity_mask(θ, sparsity_threshold)
    ck_mask = generate_ck_sparsity_mask(θ, sparsity_threshold)
    mask = window_mask and ck_mask
end for
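Combining the two masks then reduces to a logical AND; a sketch using the helpers above, with the window/CK budget split left as a free parameter (the split itself is our assumption, since Algorithm 4 does not spell it out):

def intra_epoch_mask(weight, sparsity, window_fraction=0.5):
    # First window-prune part of the budget, then CK-prune the rest;
    # a weight survives only if both masks keep it.
    wmask = window_sparsity_mask(weight, sparsity * window_fraction)
    cmask = ck_sparsity_mask(weight, sparsity * (1.0 - window_fraction))
    return wmask & cmask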
For completeness, we also tried another method of combining, called inter-epoch pruning, which involved splitting the pruning era into CK pruning and window pruning phases. However, from our initial experiments we determined that intra-epoch pruning performed better (though it was more computationally expensive) than inter-epoch pruning. With inter-epoch pruning we were only able to achieve 74.1% top-1 accuracy with a first epoch of sparsity of 40 and a final sparsity of 40% on ResNet50v1.5 and ImageNet. The same setup trained with intra-epoch pruning achieved 74.9% accuracy. Thus, we pursued intra-epoch pruning as our method to combine the two sparsification methods.
7.2 ADDITIONAL DETAILS ON EXPERIMENTAL SETUP
This section goes into more detail on the exact model and dataset combinations we used for experimentation.
7.2.1 RESNET50 ON TINY-IMAGENET
For this training domain, we trained using the Tiny-ImageNet dataset (CS231N, 2015) with ResNet50 (He et al., 2015). However, we changed the training mechanism in order to validate our results. We train each network for 40 epochs with a batch size of 64. Additionally, we use the Adam optimizer with the learning rate set to 0.001 and momentum set to 0.9. We also use weight decay set to 0.0001, and we anneal the learning rate to 0.0001 after 30 epochs of training in order to converge faster. We apply the same image transforms as on full ImageNet.
We chose this optimization method because we felt that it achieved a good overall accuracy at a baseline level and reproduces the results of the vanilla model in Sun (2016). We do not use the same preprocessing or image transforms as in the report of Sun (2016). Moreover, we wanted a quick way to estimate how our method would perform on full ImageNet.
7.2.2 RESNET50 ON IMAGENET
Here, we train each network for 90 epochs with a reduced batch size of 128 instead of 256, because 256 would not fit on a GPU in addition to our pruning layers. We found that by changing the batch size to 128 but retaining all other hyperparameters as specified in He et al. (2015), we were able to achieve the same benchmark 74.94% accuracy as the paper. We train for 90 epochs with SGD with momentum set to 0.9 and weight decay set to 1×10−4. We set the initial learning rate to 0.1 and then anneal the learning rate to 0.01 at epoch 30 and finally to 0.001 at epoch 60.
For dataset transformations, we perform the same transformations as 2. This means that during training we perform a random sized crop to size 224x224, randomly flip the image horizontally, and normalize the image. The batches are shuffled during training. For validation, we resize the image to 256 and then center crop to size 224x224 and then normalize.
7.2.3 RESNET50V1.5 ON IMAGENET
We train our model for 90/100 epochs and use SGD with momentum (0.9) to optimize. The standard recipe uses a learning rate of 0.1 for batch size 256, but since that did not fit in our GPUs with our sparsity mechanism, we used batch size 64 and linearly scaled the learning rate to 0.05. We set the learning rate decay such that we multiply by 0.1 after 30, 60, and 90 epochs. We have weight decay set to 1×10−4. ResNet50v1.5 is defined in detail in 3.
7.3 MISCELLANEOUS RESULTS
7.3.1 RESNET50 ON TINY-IMAGENET
Our models actually perform better than the baseline with the following configurations: window pruning with 60% sparsity as well as CK pruning with 20%, 60%, and 80% sparsity. The number of epochs required to converge to the final accuracies is the same for CK pruning, and earlier for window pruning at 40% and 60% sparsity.

2https://github.com/pytorch/examples/tree/master/imagenet
3https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch

| 1. What is the main contribution of the paper regarding convolutional neural networks?
2. How does the proposed strategy differ from previous works such as Narang et al. (2017) and Mao et al. (2017)?
3. Why does the reviewer question the originality of the paper's approach?
4. What concerns does the reviewer have regarding the methodology used in the paper?
5. Are there any inconsistencies or missing explanations in the paper's discussion of pruning strategies?
6. How would removing entire output channels affect the network's performance, according to the reviewer?
7. Why does the reviewer believe that the speedup benefits of the proposed method are not demonstrated in the experiments?
8. What additional information would the reviewer like to see included in the paper's experiments section? | Review | Review
This paper introduces a strategy to prune a convolutional neural network during training. To speed up training, the proposed method prunes the weights with the smallest magnitude during only a small number of epochs at the beginning of training, later on continuing training with a fixed sparsity pattern. Several granularity levels for convolutional and fully-connected layers are studied. Furthermore, the robustness of the resulting pruned networks to adversarial attacks is investigated.
Originality:
- As acknowledged at the beginning of section two, the general pruning strategy used here is very similar to that introduced by Narang et al., 2017. While the authors argued that the threshold is computed in a different manner, it also increases gradually during training, as in Narang et al., 2017.
- I acknowledge that Narang et al., 2017 focuses on RNNs, while here the focus is on CNNs. However, the originality of the different pruning strategies used here for convolutional and fully-connected layers is very limited. In essence, these strategies directly follow those studied by Mao et al., 2017.
- The study of robustness to adversarial attacks, while interesting, is also not novel per se, as the idea of performing such a study was proposed in Wang et al., 2018. I acknowledge that the conclusions drawn here differ from those in Wang et al., 2018. However, there are no explanations for this different behavior.
Methodology:
- While the beginning of Section 2 states that the pruning threshold gradually increases during training, the specific way this is achieved is not clearly explained.
- The pruning strategies depicted by Fig. 2, whether for convolutional layers or for fully-connected ones, never aim to remove entire output channels. However, the only way to truly end up with a smaller network is to remove entire channels and/or layers, as argued in Wen et al., 2016 and in Alvarez & Salzmann, NIPS 2016, as well as studied in Mao et al., 2017 via the filter-level granularity. It is unclear to me how speed would be affected by having a network with the same number of channels and layers, but many parameters set to zero.
Experiments:
- The experiments show the good behavior of the proposed algorithm in terms of sparsity vs accuracy tradeoff. However, while the introduction seems to focus on the benefits of the proposed method in terms of training speed, these benefits are not demonstrated in the experiments, where no timings (neither for training not for inference) are reported.
- As mentioned above, it is not clear to me that the speedup will be significant if the sparsity pattern does not remove entire channels, but I am willing to be proven wrong.
Summary:
My main concern about this paper is its novelty, as the method essentially uses the method of Narang et al., 2017, albeit with a different threshold, with the sparsity patterns of Mao et al., 2017. The experiments demonstrate that the method is effective at pruning, but do not provide any timings to evaluate the resulting speedups. |
ICLR | Title
Starfire: Regularization-Free Adversarially-Robust Structured Sparse Training
Abstract
This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces overhead caused by regularization. The proposed training methodology explores several options for structured sparsity. We study various tradeoffs with respect to pruning duration, learning-rate configuration, and the total length of training. We show that our method creates a sparse version of ResNet50 and ResNet50v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To ensure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable.
N/A
This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces overhead caused by regularization. The proposed training methodology explores several options for structured sparsity.
We study various tradeoffs with respect to pruning duration, learning-rate configuration, and the total length of training. We show that our method creates a sparse version of ResNet50 and ResNet50v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To ensure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable.
1 INTRODUCTION
Pruning weights can compress a network into a smaller model so that the model can fit into faster/smaller memory and therefore result in execution speedups (Han et al., 2016; 2015a). To increase the accuracy Han et al. (2015b) and Mao et al. (2017) explore training the network dense after pruning. The resulting network can maintain accuracy based on the specified level of sparsity (Mostafa & Wang, 2019; Zhu & Gupta, 2017; Han et al., 2015a).
Structured sparsity has been explored for RNNs and also CNNs where a certain number of nonzeros is allowed across various cross-sections of the weight tensors. These methods aim to speed up computation and reach some final level of sparsity for deployment. Narang et al. (2017) have shown promising results for structured training of RNNs while sparse CNNs could not achieve the same performance (Mao et al., 2017).
Recent work has demonstrated that structurally sparse training can speed up execution on GPUs (He et al., 2017; Lym et al., 2019; Zhu & Gupta, 2017). However, these training mechanisms add regularization and computational overhead to eliminate unnecessary weights. The regularization term modifies the original training and can be expensive in hardware. While enforcing coarse-grain sparsity Lym et al. (2019) provides significant speedups, the final network contains an insufficient degree of sparsity for deployment on edge devices.
Mostafa & Wang (2019) show that with adaptive sparse training and dynamic reallocation of nonzeros sparsity levels up to 80% can be achieved. However, even though an additional 10 epochs of training are required, an accuracy loss of around 1-2% is still observed. The main drawback is the overhead incurred while implementing such technique on the target platform. Continuous reconfiguration of the sparsity pattern is expensive as it does not allow for compression of weights during training.
To achieve speedups and a desired final degree of sparsity, we aim to apply the techniques in Han et al. (2015b) and Mao et al. (2017) at earlier stages in training at higher frequency within a period which we call the pruning era, usually a period of 20-30 epochs. During the pruning era, with fine granularity of at most a kernel size, we exploit one of the three proposed sparsity regimes. Subsequently, we fix the mask for the rest of the training to speed it up. Our motivation came from the insight that having a fixed sparse multiply-accumulate pattern allows weight compression during training and can save compute and energy in hardware (Han et al., 2016).
We explore the impact of various pruning granularities, sparsity levels, and learning-rate schedules on the network’s convergence as well as adversarial robustness for CNNs like Resnet-50 (He et al., 2015) on ImageNet and tinyImagenet (CS231N, 2015).
Recent literature has shown that adversarial attacks are more successful on pruned neural networks than they are on regular neural networks (Wang et al., 2018). Given the danger of adversarial attacks in real world situations, we find that it is important to evaluate our sparsity techniques under adversarial robustness. We leverage the FGSM mechanism (Goodfellow et al., 2014) to evaluate the adversarial robustness on our sparse models. This paper makes the following contributions:
1. We propose a mechanism to train and prune a convolutional network during the earlier stages of training such that this sparsity can be harvested for the computational speedups. To do this, we fix the sparse weight masks for the remainder of the training.
2. For fully connected sparsification, we eliminate blocks of fully connected weights based on their connection to the zeros in the previous convolutional layer.
3. We enforce structural, regularization free, magnitude-based pruning across two distinct dimensions and a combined version. These dimensions are inside convolution window R ×S and across input/output feature matrix (C K ).
4. Our sparse models are as robust to adversarial FGSM attacks as fully dense models.
5. We demonstrate that early stage dense training is crucial for maintaining high accuracy.
6. The proposed technique is tolerant to sparsity levels of up to 60-70% with under 1% accuracy degradation. We can compensate by scheduling an extra learning rate drop and training for an extra 10 epochs.
The rest of the paper is organized as follows. Section 2 explains our pruning methodology. Section 3 describes the experimental setup framework. Section 4 presents results and discusses their interpretation. Section 5 presents the related work. Section 6 concludes the paper.
2 PRUNING METHODOLOGY
Our proposed pruning mechanism works by always pruning the weights of smallest magnitude after each weight update. After a forward and backward pass (one batch update), the model is pruned. If a weight is already zero, the gradient is also set to zero. This means that once a weight becomes zero, it will remain zero for the rest of the training period.
This mechanism is similar to Han et al. (2015b), except that we only prune in the earlier stages of the training as opposed to post training. Additionally, this work is similar to Narang et al. (2017) although we set the sparsity threshold instead of using a heuristic to calculate it. We chose this pruning mechanism because of its negligible computational overhead.
In our pruning algorithm, the sparsity threshold refers to the percentage of weights in the network that are currently pruned. Before or during the first epoch of pruning, we will have a sparsity threshold of zero. As we continue training, we gradually increase the sparsity threshold so that by the final epoch of pruning the network sparsity will have reached our final, desired threshold. Finally, we also define the pruning era to be the epochs between the first and final epochs of pruning depicted in Figure 1b.
We evaluate the pruning mask after every training step until we reach the final epoch of pruning. After the final epoch, the pruned values in the network will remain zero for the rest of training; no new pruning will occur, and only the non-zero weights will be updated.
2.1 PRUNING METHODOLOGY BY LAYER
Pruning the smallest magnitude weights in the entire network is inefficient because it involves sorting the weights over the network. Instead, we prune the smallest magnitude weights or sum of weights, within a certain locale of the network. When pruning, we examine each layer individually and apply a separate technique to evaluate which weights to prune, depending on the type of layer we are currently pruning.
2.1.1 CONVOLUTIONAL LAYER PRUNING
Window pruning for 3x3 Convolutional Layers Figure 2a shows the result of a pruned 3×3 convolutional weight tensor under the window pruning method. In this scheme, window layer pruning refers to pruning of weights within the 3×3 convolution kernels. We allow a maximum number of non-zero values for each kernel in the 3×3 convolutional layers and eliminate the weights of smallest magnitude.
Algorithm 1 CK Pruning Algorithm
generate_ck_sparsity_mask(θl ayer , sparsity_threshold): for θ in θl ayer do
for all c in C do for all k in K do
kernel_maxc,k = max(θc,k ) end for cutoff_index = size(θc ) ∗ sparsity_threshold n = max(cutoff_index, size(θc ) − max_non_zero − 1) cutoff_value = nth largest value in kernel_maxc for all k in K do
maskc,k = 1 if kernel_maxc,k > cutoff_value, else 0 end for
end for end for
CK Pruning Methodology Figure 2b shows the result of a pruned 3×3 convolutional weight tensor under the CK pruning method. In this scheme, the weights of a certain layer can be viewed
as a CK matrix of R×S kernels. The CK pruning method involves pruning the 3×3 convolutions along the channel and kernel dimensions of each convolutional filter, i.e., we prune whole kernels (CK matrix of R×S windows) at once and can ultimately prune all the input channels in an output channel. As defined by Algorithm 1, we determine which filter to prune by examining the max of the magnitudes of all the weights in a kernel, which is the max of nine weights. This max is used to evaluate whether the whole kernel should be pruned or not.
Combined Pruning Methodology To combine window and CK pruning, we introduce an intraepoch combined pruning method, which we refer to hereafter as “intra-epoch pruning” or “intra”, for short. As shown by appendix Algorithm 4 in the Appendix, in a given epoch we first apply window pruning to each 3×3 convolutional layer at a fraction of the sparsity threshold for that epoch. Then, we prune the remaining fraction of the sparsity threshold with CK Pruning.
2.1.2 FULLY CONNECTED PRUNING
Like pruning for convolutional layers, we apply a two-tier pruning scheme from Mao et al. (2017) for fully connected layers: micro-level pruning within a block and macro-level pruning that eliminates entire blocks.
Block FC Pruning Figure 2d refers to pruning of individual blocks. Here, we prune an entire n×n (n<5) window within the dense layer and create coarse grained sparsity. To do this, we sum the magnitude of the weights in each window and prune the windows with the smallest magnitude.
Fine FC Pruning Figure 2c refers to the pruning of individual weights. Here, we prune the individual weights in the entire FC Layer, where we compare the magnitude of all the weights to each other.
The produced zero patterns in the last convolution layer allow for eliminating more weights in fully connected layer as depicted in Figure 3. If all the C windows for a specific Ki are zeros, the output activation for the corresponding Ki is also zero. The corresponding neurons in the following fully connected layer are therefore receiving zero input activations and can be eliminated along with their associated weights. This enables us to get sparsity without having to evaluate the weights in the fully connected layer.
When pruning just the small weights in the FC layer, one can inadvertently cut off relevant connections between the input and output layers. Accordingly, we structure the pruning mechanism such that each output neuron should be influenced by the input. This means every column in the weight matrix of the fully connected layer in Figure 3 has at least one non-zero element.
3 EXPERIMENTAL SETUP
To validate each type of pruning (window, CK, or intra-epoch) we selected ResNet50 (He et al., 2015) v1 and v1.5 with the ImageNet and/or Tiny-ImageNet (CS231N, 2015) datasets. We evaluated each pruning method by varying sparsity levels and pruning era.
We experimented with ResNet50 v1.5, in addition to v1, to explore how changing the network structure would affect the top-1 accuracy. For window pruning, we tested with ResNet50v1 on Tiny-ImageNet as well as ResNet50v1 and v1.5 on ImageNet to compare the impact of strided convolutions on our sparse training. Also, we experimented with the learning rate schedule of the training regime. Our typical schedule for ResNet50v1.5 included learning rate drops at epochs 30,
60, and 90, but we experimented with placing the last drop at epoch 80 instead. Unlike typical ResNet50 training, which uses a batch size of 256 and starts the learning rate at 0.1, we used batch size 64 as this is what could fit in our GPUs. As suggested by Krizhevsky (2014), we scaled the starting learning rate by 1p
4 = 12 to 0.05 in order to compensate for the smaller batch size.
3.1 SPARSE TRAINING EXPERIMENTS
For ResNet50v1 and Tiny-ImageNet, we did gradual pruning until epoch 10. We subsequently enforced the final sparsity requirement, set a maximum number of non-zero values in each window/kernel of each layer, and fixed this sparsity pattern for the rest of training. We chose the 10th epoch as the final epoch of pruning because we wanted to see if we could fix the sparsity mask early in the training process.
For ResNet50v1 and ImageNet, our goal was to start pruning as early as possible, while maintaining high accuracy. We set our pruning era to epochs 0-30. Our hypothesis was that 30th epoch would be a suitable epoch to stop pruning, because this is where the learning rate is first decreased, in addition, there would be a large drop in accuracy if we stop pruning at epoch 20. However, this schedule did not perform well for ResNet50v1.5 and ImageNet and therefore we set our pruning era to 30-50.
To test training using CK and intra-epoch pruning, we mainly used ResNet50v1 and ResNet50v1.5 with ImageNet, but also performed CK pruning on ResNet50v1 and Tiny-ImageNet. We adopted a similar approach to Han et al. (2015b) to train with CK or intra-epoch pruning by setting the first epoch of pruning to 20, 30, or 40 with a pruning era of 20 or 30 epochs. Then, we continued to train the sparsified network until the final epoch.
3.2 ADVERSARIAL ROBUSTNESS
Since there was evidence that increasing sparsity lowers adversarial robustness (Wang et al., 2018), we evaluated this robustness in our models. To do so, we applied Fast Gradient Sign Method (FGSM) attacks defined in Goodfellow et al. (2014) on one of our sparse models, to generate its own adversarial examples, and measured the validation accuracy again. We used the same validation set as ImageNet and applied the attack’s image transformation to each input image. Moreover, we experimented with a variety of different ² in order to see how our accuracy decayed. Lastly, in our experiments we leveraged the examples provided in Pytorch tutorials 1.
4 RESULTS
4.1 RESNET50 ON TINY-IMAGENET
From our experiments with Tiny-Imagenet (shown in Appendix in Table 5), we see that even with up to 80% sparsity, both window and CK pruning are able to achieve levels of accuracy comparable to the dense baseline. CK pruning performs even better than the baseline.
4.2 RESNET50 ON IMAGENET
Our ResNet50 v1.5 experiments (Table 1 and Appendix Figure 11) with the first epoch of pruning at epoch 30 show that all of our pruning methods are able to achieve over 73% accuracy, and we can achieve above 74% accuracy up to 70% sparsity. Table 2 shows that on ResNet50 v1, our methods can achieve between 0.1-0.3% less than the baseline.
By comparing the sparsity curves of the window, CK, and intra-epoch pruning runs in Figure 4 (top right), we observe that the sparsity of window pruning is not as smooth as the other methods. This is likely indicative of the more rigid structure of CK and intra-epoch pruning, which causes the degree of sparsity to be much more uniform from epoch to epoch.
Figure 4 (top left, bottom right) also shows on ResNetv1.5, the window is slightly better than the CK and intra, which have similar performance, but the window is worse than the other two on
1https://pytorch.org/tutorials/beginner/fgsm_tutorial.html
ResNetv1. Furthermore, starting the pruning era later improves performance (Figure 4-(bottom left)).
Table 3 demonstrates that our sparsity mechanism can have a minimal drop in adversarial robustness (approximately 1-1.5%) compared to the dense baseline model, whereas other methods see more accuracy degradation (Wang et al., 2018).
The sparsity of each layer, depicted in Figure 5, emphasizes that early layers tolerate sparsity better, as they have consistently higher sparsity in the last 1×1 convolutional layer of each residual block. This may be due to their vicinity to the residual connection, which provides additional information to the layer.
4.3 DISCUSSION
Overall, we notice that there is a tolerance for sparsity (up to 70%), which yields around 1% accuracy loss compared to the dense baseline. However, this loss can be compensated by dropping the learning rate and performing another 10 epochs of training, which provides a 0.7-0.9% accuracy increase. With high levels of sparsity this extension is computationally cheap.
We observed the early stages of dense training are important for high accuracy, as longer periods of dense training consistently outperformed shorter ones. Moreover, widening the pruning era slightly (10 epochs) improves the final convergence accuracy (by around 0.2%).
We also observed that pushing the learning rate drop schedule to earlier epochs or aligning it with pruning era does not improve the final accuracy. However, pushing the last learning rate drop from epoch 90 to 80 can improve the accuracy by around 0.1%. (See Appendix Table 8 and Table 1)
We postulate that window pruning performs worse for ResNetv1.5 compared to ResNetv1 due to the strided nature of convolutions in ResNetv1.5.
5 RELATED WORK
To give a broad comparison stage, we extended Mostafa & Wang (2019)’s table on alternative sparsity mechanisms in Table 4 with respect to characteristics of their mechanisms: training/compression focus, regularization, the period in which pruning is applied, strictness of parameter budget, and pruning granularity. We explain each of the columns below:
1. Training Focus: Trying to train while maintaining/increasing sparsity of the network. The opposite is Compression Focus, i.e., methods that only seek to provide a smaller network for inference.
2. Regularization: Whether a regularization term is applied to the loss in order to find and prune irrelevant weights; other methods use magnitude-based pruning.
3. Pruning Era: The period during training in which the pruning is applied.
4. Strictness of Parameter Budget with respect to the Pruning Era: A strict parameter budget is fixed to the size of the final sparse model. Mostafa & Wang (2019) keep a strict budget throughout training; our method is strict only after the pruning era. Some networks do not have a strict parameter budget and only prune weights that appear to be irrelevant, without a sparsity target.
5. Pruning Granularity: The level of granularity within the network at which values are pruned. For example, at the kernel level, we determine which values to prune by examining only the weights in the kernel (Mao et al., 2017). See Figure 2 for more information.
We chose these concepts because their characteristics can enable faster and lower-energy training. A strict parameter budget allows the hardware mapping to plan for a fixed number of multiply-accumulate operations. The granularity of the sparsity mechanism indicates how easily it can be adapted to existing hardware: the coarser the granularity, the more adaptable it is (Mao et al., 2017). Regularization, although useful in forcing the network to learn prunable weights, adds irregularity to the computation flow. A pruning era that starts at the beginning of training enables training with a compressed network.
Mao et al. (2017) explore pruning at a range of granularities, including window, kernel, and filter, and their effect on accuracy. They also show qualitatively and quantitatively that coarse-grain pruning, like kernel- or filter-level sparsity, is more energy-efficient due to fewer memory references. Similarly, our work surveys sparsity at the window, kernel, and filter levels. We improve on Mao et al.'s work in two ways: we show higher top-5 accuracy at higher sparsity levels on a complex benchmark, ImageNet on ResNet50 (92.338% at 40% CK sparsity), and we also report high top-1 accuracy, whereas Mao et al. report only top-5.
PruneTrain (Lym et al., 2019) explores a way to create sparse channels and even layers to speed up training, with around a 1% drop in accuracy. However, it requires a shift in the training mechanism, including a regularization term that must be computed throughout training and that could affect how the mechanism scales to large and distributed settings. The resulting network is only around 50% sparse, and the accuracy loss due to sparse training is high enough that a baseline network of the same accuracy could yield the same computational savings simply by terminating training at a much earlier epoch.
In contrast to other pruning mechanisms, our proposed window, CK, and combined sparsity mechanisms have strict parameter budgets after the pruning era. The CK and combined schemes have channel-level and kernel-level pruning granularities.
6 CONCLUSION AND FUTURE WORK
In this work, we introduced techniques to train CNNs with structured sparsity and studied the tradeoffs associated with various implementation options. We demonstrated on ResNet50 with the full ImageNet dataset that the proposed sparse training method outperforms all related work and is comparable to a dense model in terms of convergence accuracy. We also observed that delaying the start of enforced, gradual pruning to at least epoch 20 was necessary to reach high convergence accuracy, highlighting the importance of the early epochs of dense training. Moreover, performing an additional 10 epochs of training provides substantial (around 1%) accuracy gains of the final model. In the future, we would like to study the tradeoffs of sparse training on low-precision networks.
7 APPENDIX
7.1 DETAILS OF PRUNING ALGORITHMS
Here we provide full descriptions of our other pruning methods and our general methodology for sparse training.
Sparse Training Methodology Algorithm 2 shows how we modify normal training in order to train sparsely.
Algorithm 2 Pruning Algorithm
current_iter = 0
while training do
    if current_iter > first epoch of pruning and current_iter < last epoch of pruning then
        mask = generate_sparsity_mask(θ, current_iter, sparsity_threshold)
    end if
    θ_pruned = mask ∩ θ
    ŷ = forward_pass(θ_pruned, x)
    θ = weight_update(y, ŷ, θ_pruned)
    current_iter = current_iter + 1
end while
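A PyTorch rendering of Algorithm 2 could look as follows. This is a sketch: generate_sparsity_mask stands in for any of the mask generators below, the sparsity schedule is an assumed callable, and only 4-D convolutional weights are pruned.

import torch

def train_sparse(model, loader, optimizer, criterion, num_epochs,
                 prune_start, prune_end, sparsity_schedule, generate_sparsity_mask):
    masks = {}  # one binary mask per prunable convolutional weight
    for epoch in range(num_epochs):
        if prune_start <= epoch < prune_end:
            threshold = sparsity_schedule(epoch)  # target sparsity for this epoch
            for name, p in model.named_parameters():
                if p.dim() == 4:  # prune convolutional weights only
                    masks[name] = generate_sparsity_mask(p.data, threshold)
        for x, y in loader:
            # theta_pruned = mask ∩ theta: zero the pruned weights before the forward pass
            with torch.no_grad():
                for name, p in model.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])
            loss = criterion(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

Because the masks computed in the last pruning epoch keep being applied afterwards, the parameter budget stays strict after the pruning era, matching the description in the related-work comparison.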
Window Pruning Methodology Algorithm 3 shows how we prune with window sparsity.
Algorithm 3 Window Pruning Algorithm
generate_window_sparsity_mask(θ_layer, sparsity_threshold):
for θ in θ_layer do
    for all c in C do
        for all k in K do
            cutoff_index = size(θ_{c,k}) * sparsity_threshold
            n = max(cutoff_index, size(θ_{c,k}) − max_non_zero − 1)
            cutoff_value = nth largest value in θ_{c,k}
            for all i, j in R, S do
                mask_{i,j,c,k} = 1 if θ_{i,j,c,k} > cutoff_value, else 0
            end for
        end for
    end for
end for
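For concreteness, the same mask can be computed in a vectorized way in PyTorch. The sketch below assumes a weight tensor of shape (K, C, R, S) and treats max_non_zero as an optional cap on surviving weights per kernel.

import torch

def generate_window_sparsity_mask(weight, sparsity_threshold, max_non_zero=None):
    K, C, R, S = weight.shape
    flat = weight.abs().reshape(K * C, R * S)
    n_prune = int(R * S * sparsity_threshold)
    if max_non_zero is not None:
        n_prune = max(n_prune, R * S - max_non_zero)
    if n_prune <= 0:
        return torch.ones_like(weight)
    # cutoff = n_prune-th smallest magnitude inside each RxS kernel
    cutoff = flat.kthvalue(n_prune, dim=1, keepdim=True).values
    return (flat > cutoff).float().reshape(K, C, R, S)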
Combined Pruning Methodology To combine window and CK pruning, we introduce intra-epoch pruning. As shown in Algorithm 4, in a given epoch we first apply window pruning to each 3×3 convolutional layer at a fraction of the sparsity threshold for that epoch, and then prune the remaining fraction of the sparsity threshold with CK pruning. The idea is that kernels that lose many of their parameters during window pruning can be fully pruned during the CK pruning phase. Our intuition is that first pruning parameters within a kernel guides the subsequent CK pruning towards the less important kernels, so we pick better kernels to prune. We also gain more structured sparsity, but sacrifice the precision of window pruning.
Algorithm 4 Intra-Epoch Pruning Algorithm
generate_intra_epoch_sparsity_mask(θ_layer, sparsity_threshold):
for θ in θ_layer do
    window_mask = generate_window_sparsity_mask(θ, sparsity_threshold)
    ck_mask = generate_ck_sparsity_mask(θ, sparsity_threshold)
    mask = window_mask and ck_mask
end for
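A sketch of the combined mask generation in the same style is below; the kernel-norm criterion for CK pruning and the 50/50 split of the per-epoch sparsity budget are illustrative assumptions, since the text only specifies "a fraction".

import torch

def generate_ck_sparsity_mask(weight, sparsity_threshold):
    # Kernel-level pruning: zero whole RxS kernels with the smallest L1 norms.
    K, C, R, S = weight.shape
    norms = weight.abs().sum(dim=(2, 3)).reshape(-1)  # one norm per kernel
    n_prune = int(norms.numel() * sparsity_threshold)
    mask = torch.ones_like(norms)
    if n_prune > 0:
        mask[norms.topk(n_prune, largest=False).indices] = 0.0
    return mask.reshape(K, C, 1, 1).expand(K, C, R, S).contiguous()

def generate_intra_epoch_sparsity_mask(weight, sparsity_threshold, window_fraction=0.5):
    window_mask = generate_window_sparsity_mask(weight, sparsity_threshold * window_fraction)
    ck_mask = generate_ck_sparsity_mask(weight, sparsity_threshold * (1.0 - window_fraction))
    return window_mask * ck_mask  # logical AND of the two binary masks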
For completeness, we also tried another method of combining, called inter-epoch pruning, which involved splitting the pruning era into a CK pruning phase and a window pruning phase. However, from our initial experiments we determined that intra-epoch pruning performed better (though it was more computationally expensive) than inter-epoch pruning. With inter-epoch pruning we were only able to achieve 74.1% top-1 accuracy with the first epoch of sparsity at 40 and a final sparsity of 40% on ResNet50 v1.5 and ImageNet; the same setup trained with intra-epoch pruning achieved 74.9% accuracy. Thus, we pursued intra-epoch pruning as our method for combining the two sparsification methods.
7.2 ADDITIONAL DETAILS ON EXPERIMENTAL SETUP
This section provides more detail on the exact model and dataset combinations we used for experimentation.
7.2.1 RESNET50 ON TINY-IMAGENET
For this training domain, we trained on the Tiny-ImageNet dataset CS231N (2015) with ResNet50 He et al. (2015). However, we changed the training mechanism in order to validate our results. We train each network for 40 epochs with a batch size of 64. Additionally, we use the Adam optimizer with the learning rate set to 0.001 and momentum set to 0.9. We also use weight decay set to 0.0001, and we anneal the learning rate to 0.0001 after 30 epochs of training in order to converge faster. We apply the same image transforms as on full ImageNet.
We chose this optimization method because it achieved a good baseline accuracy and reproduces the results of the vanilla model in Sun (2016). We do not use the same preprocessing or image transforms as Sun (2016). Moreover, we wanted a quick way to estimate how our method would perform on full ImageNet.
7.2.2 RESNET50 ON IMAGENET
Here, we train each network for 90 epochs with a reduced batch size of 128 instead of 256, because 256 would not fit on a GPU together with our pruning layers. We found that changing the batch size to 128 while retaining all other hyperparameters specified in He et al. (2015) achieved the same benchmark 74.94% accuracy as the paper. We train for 90 epochs with SGD, momentum set to 0.9, and weight decay of 1×10−4. We set the initial learning rate to 0.1, anneal it to 0.01 at epoch 30, and finally to 0.001 at epoch 60.
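For concreteness, this schedule corresponds to the following PyTorch setup (a sketch; train_one_epoch is a placeholder for the training loop).

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# drop the learning rate by 10x at epochs 30 and 60
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)
for epoch in range(90):
    train_one_epoch(model, optimizer)
    scheduler.step()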
For dataset transformations, we perform the same transformations as in the PyTorch ImageNet example 2. During training we take a random resized crop to 224x224, randomly flip the image horizontally, and normalize the image; the batches are shuffled. For validation, we resize the image to 256, center-crop to 224x224, and normalize.
7.2.3 RESNET50V1.5 ON IMAGENET
We train our model for 90/100 epochs and optimize with SGD with momentum (0.9). The standard recipe sets the learning rate to 0.1 for batch size 256, but since that did not fit on our GPUs with our sparsity mechanism, we used batch size 64 and linearly scaled the learning rate to 0.05. We decay the learning rate by a factor of 0.1 after 30, 60, and 90 epochs, and set weight decay to 1×10−4. ResNet50 v1.5 is defined in detail in 3.
7.3 MISCELLANEOUS RESULTS
7.3.1 RESNET50 ON TINY-IMAGENET
Our models actually perform better than the baseline with the following configurations: window pruning with 60% sparsity, as well as CK pruning with 20%, 60%, and 80% sparsity. The number of epochs required to converge to the final accuracies is the same for CK, and earlier for window at 40% and 60% sparsity.

2 https://github.com/pytorch/examples/tree/master/imagenet
3 https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch
| 1. How effective is the proposed training method on architectures other than ResNets?
2. What happens if the "pruning era" is made longer, started substantially earlier, or started substantially later?
3. How robust is the ordering of the methods when the experiment is re-run with different random seeds, etc.?
4. Can you provide a clearer name for the "combined" method instead of "intra-epoch pruning"?
5. Can you specify what GPUs were used in the experimental setup?
6. Can you discuss how predictive results on Tiny-ImageNet are for results on ImageNet?
7. Can you add context to Table 2 by comparing to prior work with sparsity level 60% and some of the compression-focused methods from Table 4?
8. Can you clarify that Mao et al. (2017) also work with ResNet models on ImageNet in the comparison? | Review | Review
The paper investigates methods to train neural networks so the final network has sparse weights, both in convolutional layers and in fully connected layers. In particular, the paper focuses on modifying the training so that the network is first trained without sparsification for a certain number of epochs, then trained to be increasingly sparse, and then fine-tuned with a fixed sparsity pattern at the end.
While I find the overall approach of the paper interesting, currently the experiments are not systematic enough to derive clear insights from the paper. Hence I unfortunately recommend rejecting the paper at this point. I hope the authors find time to conduct more systematic experiments for a future version of the paper.
Concretely, the following would be interesting experiments / questions:
- How effective is the proposed training method on architectures other than ResNets?
- What happens if the "pruning era" is made longer, started substantially earlier, or started substantially later? Currently it is not clear whether the epoch 30-50 pruning era is (approximately) optimal and how much performance varies with the beginning and end of the pruning era.
- Due to the small variation between some of the methods, it would be good to investigate how robust the ordering is when the experiment is re-run with different random seeds, etc.
In addition, I have the following suggestions:
- The authors may want to remove or enhance the adversarial robustness evaluation. Currently the authors only evaluate robustness against FGSM, but it is well known that iterative attacks such as PGD are more effective.
- Instead of "intra-epoch pruning" or "intra", the name "combined" may be more clear for the combined method.
- In the description of the experimental setup, it could be good to specify what GPUs were used (since this lead to the smaller batch size).
- It could be helpful for the reader to discuss how predictive results on Tiny-ImageNet are for results on ImageNet.
- In Table 2, it would be good to add context by comparing to prior work with sparsity level 60% and some of the compression-focused methods from Table 4.
- In the comparison to Mao et al. (2017), it could be good to clarify that they also work with ResNet models on ImageNet. |
ICLR | Title
Variational Autoencoder with Arbitrary Conditioning
Abstract
We propose a single neural probabilistic model based on variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in “one shot”. The features may be both real-valued and categorical. Training of the model is performed by stochastic variational Bayes. The experimental evaluation on synthetic data, as well as feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and diversity of the generated samples.
1 INTRODUCTION
In recent years, a number of generative probabilistic models based on neural networks have been proposed. The most popular approaches include the variational autoencoder (VAE) (Kingma & Welling, 2013) and generative adversarial networks (GANs) (Goodfellow et al., 2014). They learn a distribution over objects p(x) and allow sampling from this distribution.
In many cases, we are interested in learning a conditional distribution p(x|y). For instance, if x is an image of a face, y could be the characteristics describing the face (whether glasses are present, the length of hair, etc.). The conditional variational autoencoder (Sohn et al., 2015) and conditional generative adversarial nets (Mirza & Osindero, 2014) are popular methods for this problem.
In this paper, we consider the problem of learning all conditional distributions of the form p(xI |xU\I), where U is the set of all features and I is its arbitrary subset. This problem generalizes both learning the joint distribution p(x) and learning the conditional distribution p(x|y). To tackle this problem, we propose a Variational Autoencoder with Arbitrary Conditioning (VAEAC) model. It is a latent variable model similar to VAE, but allows conditioning on an arbitrary subset of the features. The conditioning features affect the prior on the latent Gaussian variables which are used to generate unobserved features. The model is trained using stochastic gradient variational Bayes (Kingma & Welling, 2013).
We consider two most natural applications of the proposed model. The first one is feature imputation where the goal is to restore the missing features given the observed ones. The imputed values may be valuable by themselves or may improve the performance of other machine learning algorithms which process the dataset. Another application is image inpainting in which the goal is to fill in an unobserved part of an image with an artificial content in a realistic way. This can be used for removing unnecessary objects from the images or, vice versa, for complementing the partially closed or corrupted object.
∗ The author is now at DeepMind.
The experimental evaluation shows that the proposed model successfully samples from the conditional distributions. The distribution over samples is close to the true conditional distribution. This property is very important when the true distribution has several modes. The model is shown to be effective in feature imputation problem which helps to increase the quality of subsequent discriminative models on different problems from UCI datasets collection (Lichman, 2013). We demonstrate that model can generate diverse and realistic image inpaintings on MNIST (LeCun et al., 1998), Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets, and works even better than the current state of the art inpainting techniques in terms of peak signal to noise ratio (PSNR).
The paper is organized as follows. In section 2 we review related work. In section 3 we briefly describe variational autoencoders and conditional variational autoencoders. In section 4 we define the problem and describe the VAEAC model and its training procedure. In section 5 we evaluate VAEAC. Section 6 concludes the paper. The appendix contains additional explanations, theoretical analysis, and experiments for VAEAC.
2 RELATED WORK
Universal Marginalizer (Douglas et al., 2017) is a model based on a feed-forward neural network which approximates marginals of unobserved features conditioned on observable values. A related idea of an autoregressive model of joint probability was previously proposed in Germain et al. (2015) and Uria et al. (2016). The description of the model and comparison with VAEAC are available in section 5.3.
Yoon et al. (2018) propose a GANs-based model called GAIN which solves the same problem as VAEAC. In contrast to VAEAC, GAIN does not use unobserved data during training, which makes it easier to apply to the missing features imputation problem. Nevertheless, it turns into a disadvantage when the fully-observed training data is available but the missingness rate at the testing stage is high. For example, in inpainting setting GAIN cannot learn the conditional distribution over MNIST digits given one horizontal line of the image while VAEAC can (see appendix D.4). The comparison of VAEAC and GAIN on the missing feature imputation problem is given in section 5.1 and appendix D.2.
Rezende et al. (2014) [Appendix F], Sohl-Dickstein et al. (2015), Goyal et al. (2017), and Bordes et al. (2017) propose to fill missing data with noise and run a Markov chain with a learned transition operator. The stationary distribution of such chains approximates the true conditional distribution of the unobserved features. Bachman & Precup (2015) consider missing feature imputation in terms of a Markov decision process and propose an LSTM-based sequential decision-making model to solve it. Nevertheless, these methods are computationally expensive at test time and require fully-observed training data.
Image inpainting is a classic computer vision problem. Most of the earlier methods rely on local and texture information or hand-crafted problem-specific features (Bertalmio et al., 2000). In past years multiple neural network based approaches have been proposed.
Pathak et al. (2016), Yeh et al. (2016) and Yang et al. (2017) use different kinds and combinations of adversarial, reconstruction, texture and other losses. Li et al. (2017) focus on face inpainting and use two adversarial losses and one semantic parsing loss to train the generative model. In Yeh et al. (2017), GANs are first trained on the whole training dataset. The inpainting is an optimization procedure that finds the latent variables that best explain the observed features. Then, the obtained latents are passed through the generative model to restore the unobserved portion of the image. We can say that VAEAC is a similar model which uses a prior network to find proper latents instead of solving the optimization problem.
All the described methods aim to produce a single realistic inpainting, while VAEAC is capable of sampling diverse inpaintings. Additionally, Yeh et al. (2016), Yang et al. (2017) and Yeh et al. (2017) have high test-time computational complexity of inpainting, because they require an optimization problem to be solved. On the other hand, VAEAC is a "single-shot" method with a low computational cost.
3 BACKGROUND
3.1 VARIATIONAL AUTOENCODER
Variational autoencoder (Kingma & Welling, 2013) (VAE) is a directed generative model with latent variables. The generative process in variational autoencoder is as follows: first, a latent variable z is generated from the prior distribution p(z), and then the data x is generated from the generative distribution pθ(x|z), where θ are the generative model’s parameters. This process induces the distribution pθ(x) = Ep(z)pθ(x|z). The distribution pθ(x|z) is modeled by a neural network with parameters θ. p(z) is a standard Gaussian distribution.
The parameters θ are tuned by maximizing the likelihood of the training data points {xi}Ni=1 from the true data distribution pd(x). In general, this optimization problem is challenging due to intractable posterior inference. However, a variational lower bound can be optimized efficiently using backpropagation and stochastic gradient descent:
$$\log p_\theta(x) = \mathbb{E}_{q_\phi(z|x)} \log \frac{p_\theta(x, z)}{q_\phi(z|x)} + D_{KL}\big(q_\phi(z|x)\,\|\,p(z|x,\theta)\big) \geq \mathbb{E}_{q_\phi(z|x)} \log p_\theta(x|z) - D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) = \mathcal{L}_{VAE}(x;\theta,\phi) \quad (1)$$
Here qφ(z|x) is a proposal distribution, parameterized by a neural network with parameters φ, that approximates the posterior p(z|x, θ). Usually this distribution is Gaussian with a diagonal covariance matrix. The closer qφ(z|x) is to p(z|x, θ), the tighter the variational lower bound LVAE(x; θ, φ). To compute the gradient of the variational lower bound with respect to φ, the reparameterization trick is used: z = µφ(x) + εσφ(x), where ε ∼ N(0, I) and µφ and σφ are deterministic functions parameterized by neural networks. The gradient can then be estimated using the Monte Carlo method for the first term, while the second term is computed analytically:
$$\frac{\partial \mathcal{L}_{VAE}(x;\theta,\phi)}{\partial \phi} = \mathbb{E}_{\varepsilon \sim \mathcal{N}(0,I)} \frac{\partial}{\partial \phi} \log p_\theta\big(x\,|\,\mu_\phi(x) + \varepsilon \sigma_\phi(x)\big) - \frac{\partial}{\partial \phi} D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) \quad (2)$$
So LVAE(x; θ, φ) can be optimized using stochastic gradient ascent with respect to φ and θ.
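For concreteness, a single-sample reparameterized estimate of the lower bound takes a few lines of PyTorch; this sketch assumes an encoder returning the mean and log-variance of qφ(z|x) and a decoder returning per-pixel Bernoulli logits (both assumptions for illustration).

import torch
import torch.nn.functional as F

def vae_elbo(x, encoder, decoder):
    mu, logvar = encoder(x)                               # parameters of q_phi(z|x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # z = mu + eps * sigma
    logits = decoder(z)
    log_px_z = -F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(-1)
    # closed-form KL(q_phi(z|x) || N(0, I)) for diagonal Gaussians
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1)
    return (log_px_z - kl).mean()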
3.2 CONDITIONAL VARIATIONAL AUTOENCODER
Conditional variational autoencoder (Sohn et al., 2015) (CVAE) approximates the conditional distribution pd(x|y). It outperforms deterministic models when the distribution pd(x|y) is multi-modal (diverse xs are probable for the given y). For example, assume that x is a real-valued image. Then, a deterministic regression model with mean squared error loss would predict the average blurry value for x. On the other hand, CVAE learns the distribution of x, from which one can sample diverse and realistic objects.
Variational lower bound for CVAE can be derived similarly to VAE by conditioning all considered distributions on y:
$$\mathcal{L}_{CVAE}(x,y;\theta,\psi,\phi) = \mathbb{E}_{q_\phi(z|x,y)} \log p_\theta(x|z,y) - D_{KL}\big(q_\phi(z|x,y)\,\|\,p_\psi(z|y)\big) \leq \log p_{\theta,\psi}(x|y) \quad (3)$$
Similarly to VAE, this objective is optimized using the reparameterization trick. Note that the prior distribution pψ(z|y) is conditioned on y and is modeled by a neural network with parameters ψ. Thus, CVAE uses three trainable neural networks, while VAE only uses two.
The authors also propose modifications of CVAE such as the Gaussian stochastic neural network and the hybrid model. These modifications can be applied to our model as well. Nevertheless, we do not use them because of a disadvantage described in appendix C.
4 VARIATIONAL AUTOENCODER WITH ARBITRARY CONDITIONING
4.1 PROBLEM STATEMENT
Consider a distribution pd(x) over a D-dimensional vector x with real or categorical components. The components of the vector are called features.
Let binary vector b ∈ {0, 1}D be the binary mask of unobserved features of the object. Then we describe the vector of unobserved features as xb = {xi:bi=1}. For example, x(0,1,1,0,1) = (x2, x3, x5). Using this notation we denote x1−b as a vector of observed features.
Our goal is to build a model of the conditional distribution pψ,θ(xb|x1−b, b) ≈ pd(xb|x1−b, b) for an arbitrary b, where ψ and θ are parameters that are used in our model at the testing stage.
However, the true distribution pd(xb|x1−b, b) is intractable without strong assumptions about pd(x). Therefore, our model pψ,θ(xb|x1−b, b) has to be more precise for some b and less precise for others. To formalize our requirements about the accuracy of our model we introduce the distribution p(b) over different unobserved feature masks. The distribution p(b) is arbitrary and may be defined by the user depending on the problem. Generally it should have full support over {0, 1}D so that pψ,θ(xb|x1−b, b) can evaluate arbitrary conditioning. Nevertheless, it is not necessary if the model is used for specific kinds of conditioning (as we do in section 5.2).
Using p(b) we can introduce the following log-likelihood objective function for the model:
$$\max_{\psi,\theta}\; \mathbb{E}_{p_d(x)} \mathbb{E}_{p(b)} \log p_{\psi,\theta}(x_b|x_{1-b}, b) \quad (4)$$
The special cases of the objective (4) are variational autoencoder (bi = 1 ∀i ∈ {1, . . . , D}) and conditional variational autoencoder (b is constant).
4.2 MODEL DESCRIPTION
The generative process of our model is similar to the generative process of CVAE: for each object we first generate z ∼ pψ(z|x1−b, b) using the prior network, and then sample the unobserved features xb ∼ pθ(xb|z, x1−b, b) using the generative network. This process induces the following model distribution over the unobserved features:

$$p_{\psi,\theta}(x_b|x_{1-b},b) = \mathbb{E}_{z \sim p_\psi(z|x_{1-b},b)}\, p_\theta(x_b|z, x_{1-b}, b) \quad (5)$$
We use z ∈ R^d and a Gaussian distribution pψ over z with parameters from a neural network with weights ψ: pψ(z|x1−b, b) = N(z | µψ(x1−b, b), σ²ψ(x1−b, b) I). The real-valued components of the distribution pθ(xb|z, x1−b, b) are defined likewise. Each categorical component i of the distribution pθ(xi|z, x1−b, b) is parameterized by a function wi,θ(z, x1−b, b) whose outputs are the logits of the probabilities of each category: xi ∼ Cat[Softmax(wi,θ(z, x1−b, b))]. Therefore the components of the latent vector z are conditionally independent given x1−b and b, and the components of xb are conditionally independent given z, x1−b and b.
The variables xb and x1−b have variable length that depends on b. In order to use architectures such as multi-layer perceptrons and convolutional neural networks, we consider x1−b = x ◦ (1 − b), where ◦ is the element-wise product, so that in the implementation x1−b has a fixed length. The output of the generative network also has a fixed length, but we use only the unobserved components to compute the likelihood.
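In code, this input construction is a one-liner (a sketch):

import torch

def make_network_input(x, b):
    # x_{1-b} = x ∘ (1 − b): zero the unobserved features, then append the mask
    # so every input has a fixed length regardless of b.
    return torch.cat([x * (1.0 - b), b], dim=-1)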
The theoretical analysis of the model is available in appendix B.1.
4.3 LEARNING VARIATIONAL AUTOENCODER WITH ARBITRARY CONDITIONING
4.3.1 VARIATIONAL LOWER BOUND
We can derive a lower bound for log pψ,θ(xb|x1−b, b) as for variational autoencoder:
$$\log p_{\psi,\theta}(x_b|x_{1-b}, b) = \mathbb{E}_{q_\phi(z|x,b)} \log \frac{p_{\psi,\theta}(x_b, z|x_{1-b}, b)}{q_\phi(z|x,b)} + D_{KL}\big(q_\phi(z|x,b)\,\|\,p_{\psi,\theta}(z|x,b)\big)$$
$$\geq \mathbb{E}_{q_\phi(z|x,b)} \log p_\theta(x_b|z, x_{1-b}, b) - D_{KL}\big(q_\phi(z|x,b)\,\|\,p_\psi(z|x_{1-b},b)\big) = \mathcal{L}_{VAEAC}(x,b;\theta,\psi,\phi) \quad (6)$$
Therefore we have the following variational lower bound optimization problem:
$$\max_{\theta,\psi,\phi}\; \mathbb{E}_{p_d(x)} \mathbb{E}_{p(b)}\, \mathcal{L}_{VAEAC}(x,b;\theta,\psi,\phi) \quad (7)$$
We use a fully-factorized Gaussian proposal distribution qφ, which allows us to apply the reparameterization trick and compute the KL divergence analytically in order to optimize (7).
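A minimal PyTorch sketch of a single-sample estimate of (6) is given below. The three networks are assumed to return the means and standard deviations of diagonal Gaussians, and a Gaussian generative distribution is assumed for simplicity.

import torch
from torch.distributions import Normal, kl_divergence

def vaeac_elbo(x, b, proposal_net, prior_net, generative_net):
    full_input = torch.cat([x, b], dim=-1)            # the proposal sees the whole object
    obs_input = torch.cat([x * (1 - b), b], dim=-1)   # the prior sees observed features only
    q = Normal(*proposal_net(full_input))
    p = Normal(*prior_net(obs_input))
    z = q.rsample()                                   # reparameterized latent sample
    rec_mu, rec_sigma = generative_net(torch.cat([z, obs_input], dim=-1))
    # the likelihood is computed on the unobserved components only
    log_px = (Normal(rec_mu, rec_sigma).log_prob(x) * b).sum(-1)
    kl = kl_divergence(q, p).sum(-1)
    return (log_px - kl).mean()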
4.3.2 PRIOR IN LATENT SPACE
During the optimization of objective (7), the parameters µψ and σψ of the prior distribution of z may tend to infinity, since there is no penalty for large values of those parameters. We usually observe a slow growth of ‖z‖2 during training. To prevent potential numerical instabilities, we put a Normal-Gamma prior on the parameters of the prior distribution to prevent the divergence. Formally, we redefine pψ(z|x1−b, b) as follows:
$$p_\psi(z, \mu_\psi, \sigma_\psi|x_{1-b}, b) = \mathcal{N}(z|\mu_\psi, \sigma_\psi^2)\, \mathcal{N}(\mu_\psi|0, \sigma_\mu)\, \mathrm{Gamma}(\sigma_\psi|2, \sigma_\sigma) \quad (8)$$
As a result, the regularizers $-\frac{\mu_\psi^2}{2\sigma_\mu^2}$ and $\sigma_\sigma(\log \sigma_\psi - \sigma_\psi)$ are added to the model log-likelihood. The hyperparameter σµ is chosen to be large (10^4) and σσ is taken to be a small positive number (10^−4). This distribution is close to uniform near zero, so it does not affect the learning process significantly.
4.3.3 MISSING FEATURES
The optimization objective (7) requires all features of each object at the training stage: some of the features will be observed variables at the input of the model and others will be unobserved features used to evaluate the model. Nevertheless, in some problem settings the training data contains missing features too. We propose the following slight modification of problem (7) in order to cover such problems as well.
The missing values cannot be observed, so xi = ω ⇒ bi = 1, where ω denotes a missing value in the data. In order to meet this requirement, we make the mask distribution conditional on x: p(b) turns into p(b|x) in (4) and (7). In the reconstruction loss (5) we simply omit the missing features, i.e. marginalize them out:
$$\log p_\theta(x_b|z, x_{1-b}, b) = \sum_{i:\, b_i=1,\, x_i \neq \omega} \log p_\theta(x_i|z, x_{1-b}, b) \quad (9)$$
The proposal network must be able to distinguish features that come from a real object from features that are simply missing. So we use an additional missing-features mask, which is fed to the proposal network together with the unobserved-features mask b and the object x.
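In implementation, the marginalization in (9) amounts to one extra multiplicative mask in the reconstruction term. A sketch, where encoding missing values by NaN is an illustrative convention rather than the authors' exact code:

import torch
from torch.distributions import Normal

def masked_reconstruction_log_prob(x, b, rec_mu, rec_sigma):
    # sum the log-likelihood over features that are unobserved (b_i = 1) and not missing
    missing = torch.isnan(x)
    x_safe = torch.where(missing, torch.zeros_like(x), x)  # placeholder values
    log_p = Normal(rec_mu, rec_sigma).log_prob(x_safe)
    keep = b * (~missing).float()
    return (log_p * keep).sum(-1)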
The proposed modifications are evaluated in section 5.1.
5 EXPERIMENTS
In this section we validate the performance of VAEAC using several real-world datasets. In the first set of experiments we evaluate the missing features imputation performance of VAEAC on various UCI datasets (Lichman, 2013). We compare imputations from our model with imputations from classical methods such as MICE (Buuren & Groothuis-Oudshoorn, 2010) and MissForest (Stekhoven & Bühlmann, 2011) and the recently proposed GANs-based method GAIN (Yoon et al., 2018). In the second set of experiments we use VAEAC to solve the image inpainting problem. We show inpaintings generated by VAEAC and compare our model with the models from Pathak et al. (2016), Yeh et al. (2017) and Li et al. (2017) in terms of peak signal-to-noise ratio (PSNR) of the obtained inpaintings on the CelebA dataset (Liu et al., 2015). Finally, we evaluate VAEAC against the competing method called Universal Marginalizer (Douglas et al., 2017). Additional experiments can be found in appendices C and D. The code is available at https://github.com/tigvarts/vaeac.
5.1 MISSING FEATURES IMPUTATION
Datasets with missing features are widespread. Consider a dataset with D-dimensional objects x, where each feature may be missing (which we denote by xi = ω), and their target values y. The majority of discriminative methods do not support missing values in the objects. The procedure of filling in the missing feature values is called missing features imputation.
In this section we evaluate the quality of imputations produced by VAEAC. For evaluation we use datasets from the UCI repository (Lichman, 2013). Before training we randomly drop 50% of the values in both the train and test sets. After that, we impute the missing features using MICE (Buuren & Groothuis-Oudshoorn, 2010), MissForest (Stekhoven & Bühlmann, 2011), GAIN (Yoon et al., 2018) and VAEAC trained on the observed data. The details of the GAIN implementation are described in appendix A.4.
Our model learns the distribution of the imputations, so it is able to sample from this distribution. We replace each object with missing features by n = 10 objects with sampled imputations, so the size of the dataset increases by n times. This procedure is called missing features multiple imputation. MICE and GAIN are also capable of multiple imputation (we use n = 10 for them in experiments as well), but MissForest is not.
For more details about the experimental setup see appendices A.1, A.2, and A.4.
In table 1 we report NRMSE (i.e. RMSE normalized by the standard deviation of each feature and then averaged over all features) of imputations for continuous datasets and proportion of falsely classified (PFC) for categorical ones. For multiple imputation methods we average imputations of continuous variables and take most frequent imputation for categorical ones for each object.
We also learn linear or logistic regression and report the regression or classification performance after applying imputations of different methods in table 2. For multiple imputation methods we average predictions for continuous targets and take most frequent prediction for categorical ones for each object in test set.
As can be seen from the tables 1 and 2, VAEAC can learn joint data distribution and use it for missing feature imputation. The imputations are competitive with current state of the art imputation methods in terms of RMSE, PFC, post-imputation regression R2-score and classification accuracy. Nevertheless, we don’t claim that our method is state of the art in missing features imputation; for some datasets MICE or MissForest outperform it. The additional experiments can be found in appendix D.2.
5.2 IMAGE INPAINTING
The image inpainting problem has a number of different formulations. The formulation of interest to us is as follows: some of the pixels of an image are unobserved and we want to restore them in a natural way. Unlike the majority of papers, we want to restore not just the single most probable inpainting, but the distribution over all possible inpaintings from which we can sample. This distribution is extremely multi-modal because there are often many different plausible ways to inpaint the image.
Unlike the previous subsection, here we have uncorrupted images without missing features in the training set, so p(b|x) = p(b). As we discussed in section 2, state-of-the-art results use different adversarial losses to achieve sharper and more realistic samples. VAEAC can be adapted to the image inpainting problem by using a combination of those adversarial losses as a part of the reconstruction loss pθ(xb|z, x1−b, b). Nevertheless, such a construction is out of scope for this work, so we leave it for the future. In the current work we show that the model can generate both diverse and realistic inpaintings.
In figures 1, 2, 3 and 4 we visualize image inpaintings produced by VAEAC on binarized MNIST (LeCun et al., 1998), Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015). The details of the learning procedure and descriptions of the datasets are available in appendices A.1 and A.3.
To the best of our knowledge, the most recent inpainting papers do not consider the diverse inpainting problem, where the goal is to produce diverse image inpaintings, so there is no straightforward way to compare with these models. Nevertheless, we compute the peak signal-to-noise ratio (PSNR) for one random inpainting from VAEAC and the best PSNR among 10 random inpaintings from VAEAC. One inpainting might not be similar to the original image, so we also measure how well the inpainting that is most similar to the original image reconstructs it. We compare these two metrics, computed for certain masks, with the PSNRs for the same masks on CelebA from the papers Yeh et al. (2017) and Li et al. (2017). The results are available in tables 3 and 4.
We observe that for the majority of the proposed masks our model outperforms the competing methods in terms of PSNR even with one sample, and for the rest (where the inpaintings are significantly diverse) the best PSNR over 10 inpaintings is larger than the corresponding PSNR of the competing models. Even if PSNR does not completely reflect the visual quality of images and tends to favor blurry VAE samples over realistic GAN samples, the results show that VAEAC is able to solve the inpainting problem comparably to the state-of-the-art methods. The disadvantage of VAEAC compared to Yeh et al. (2017) and Li et al. (2017) (but not Pathak et al. (2016)) is that it needs the distribution over masks at the training stage to be similar to the distribution over them at the test stage. However, this is not a very strict limitation for practical usage.
5.3 UNIVERSAL MARGINALIZER
Universal Marginalizer (Douglas et al., 2017) (UM) is a model which uses a single neural network to estimate the marginal distributions over the unobserved features. So it optimizes the following objective:
$$\max_\theta\; \mathbb{E}_{x \sim p_d(x)} \mathbb{E}_{b \sim p(b)} \sum_{i=1}^D b_i \log p_\theta(x_i|x_{1-b}, b) \quad (10)$$
For a given mask b we fix a permutation of its unobserved components: (i1, i2, . . . , i|b|), where |b| is the number of unobserved components. Using the learned model and the permutation, we can generate objects from the joint distribution and estimate their probability using the chain rule:
$$\log p_\theta(x_b|x_{1-b}, b) = \sum_{j=1}^{|b|} \log p_\theta\Big(x_{i_j} \,\Big|\, x_{1-(b-\sum_{k=1}^{j-1} e_{i_k})},\; b - \sum_{k=1}^{j-1} e_{i_k}\Big) \quad (11)$$
For example, pθ(x1, x4, x5|x2, x3) = pθ(x4|x2, x3) pθ(x1|x2, x3, x4) pθ(x5|x1, x2, x3, x4). Conditional sampling or conditional likelihood estimation for one object requires |b| requests to UM to compute pθ(xi|x1−b, b). Each request is a forward pass through the neural network. In the case of conditional sampling, these requests cannot even be parallelized, because the input of each request contains the output of the previous one.
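A sketch of this sequential sampling procedure makes the per-feature forward passes explicit; um_net returning a list of per-feature torch distributions is an assumption for illustration.

import torch

def um_conditional_sample(x, b, um_net):
    # x: 1-D feature vector; b: binary mask of unobserved components.
    x, b = x.clone(), b.clone()
    order = torch.nonzero(b).flatten()
    order = order[torch.randperm(order.numel())]  # random order of unobserved features
    for i in order.tolist():
        dists = um_net(x * (1 - b), b)  # one forward pass per feature
        x[i] = dists[i].sample()
        b[i] = 0                        # feature i is now observed
    return x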
We propose a slight modification of the original UM training procedure which allows learning UM efficiently for any kind of masks including those considered in this paper. The details of the modification are described in appendix B.3.
1 The results are from the paper Yeh et al. (2017).
2 The results are from the paper Li et al. (2017).
Left: input (the gray pixels are unobserved). Middle: samples from VAEAC. Right: ground truth.
The results of using this modification of UM are provided in table 5. We can say that the relation between VAEAC and UM is similar to the relation between VAE and PixelCNN: the latter is much slower at the testing stage, but it easily takes local dependencies in the data into account, while the former is faster but assumes conditional independence of the outputs. Nevertheless, there are a number of cases where UM cannot learn the distribution well while VAEAC can. For example, when the data is real-valued and the marginal distributions have many local optima, there is no straightforward parametrization that allows UM to approximate them, and therefore also the conditioned joint distribution. An example of such a distribution and more illustrations comparing VAEAC and UM are available in appendix D.5.
6 CONCLUSION
In this paper we consider the problem of simultaneously learning all conditional distributions for a vector. This problem has a number of different special cases with practical applications. We propose a neural-network-based probabilistic model with Gaussian latent variables for learning conditional distributions. This model is scalable and efficient in inference and learning. We propose several tricks to improve optimization and give recommendations about the choice of hyperparameters. The model is successfully applied to feature imputation and inpainting tasks. The experimental results show that the model is competitive with state-of-the-art methods for both the missing features imputation and image inpainting problems.
APPENDIX
A EXPERIMENTAL DETAILS
A.1 NEURAL NETWORK ARCHITECTURES
In all experiments we use optimization method Adam (Kingma & Ba, 2014), skip-connections between prior network and generative network inspired by (Mao et al., 2016), (Sønderby et al., 2016) and (Ronneberger et al., 2015), and convolutional neural networks based on ResNet blocks (He et al., 2016).
Without skip-connections, all information for the decoder goes through the latent variables. In image inpainting we found skip-connections very useful both in terms of log-likelihood improvement and image realism, because the latent variables are responsible for the global information only, while the local information passes through the skip-connections. Therefore the border between the image and the inpainting becomes less conspicuous.
The main idea of neural networks architecture is reflected in figure 5.
The number of hidden layers, their widths and structure may be different.
The neural networks we used for image inpainting have He-Uniform initialization of convolutional ResNet blocks, and the skip-connections are implemented using concatenation, not addition. The proposal network structure is exactly the same as the prior network except skip-connections.
One could also use much simpler fully-connected networks with one hidden layer as the proposal, prior and generative networks in VAEAC and still obtain nice inpaintings on MNIST.
A.2 MISSING FEATURES IMPUTATION
We split the dataset into train and test sets with a size ratio of 3:1. Before training we randomly drop 50% of the values in both the train and test sets. We repeat each experiment 5 times with different train-test splits and dropped features, and then average the results and compute their standard deviation.
As we show in appendix B.2, the better results can be achieved when the model learns the concatenation of objects features x and targets y. So we treat y as an additional feature that is always unobserved during the testing time.
To train our model we use a distribution p(bi|x) in which p(bi = 1|xi = ω) = 1 and p(bi = 1|x) = 0.2 otherwise. Also, for VAEAC training we normalize real-valued features, fix σθ = 1 in the generative model of VAEAC in order to optimize RMSE, and use 25% of the training data as a validation set to select the best model among all epochs of training.
For the test set, the classifier or regressor is applied to each of the n imputed objects and the predictions are combined. For regression problems we report R2-score of combined predictions, so we use averaging as a combination method. For classification problem we report accuracy, and therefore choose the mode. We consider the workflow where the imputed values of y are not fed to the classifier or regressor to make a fair comparison of feature imputation quality.
The NRMSE or PFC for a dataset is computed as the average of the NRMSE or PFC of all features of this dataset. The NRMSE of a feature is just the RMSE of its imputations divided by the standard deviation of this feature. The PFC of a feature is the proportion of imputations that are incorrect.
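Both metrics are straightforward to compute. A NumPy sketch, assuming arrays of true values and imputations plus a boolean missingness mask:

import numpy as np

def nrmse(true, imputed, missing_mask):
    # per-feature RMSE of imputations, normalized by the feature std, then averaged
    scores = []
    for j in range(true.shape[1]):
        m = missing_mask[:, j]
        if m.any():
            rmse = np.sqrt(np.mean((true[m, j] - imputed[m, j]) ** 2))
            scores.append(rmse / true[:, j].std())
    return np.mean(scores)

def pfc(true, imputed, missing_mask):
    # proportion of falsely classified categorical imputations, averaged over features
    scores = []
    for j in range(true.shape[1]):
        m = missing_mask[:, j]
        if m.any():
            scores.append(np.mean(true[m, j] != imputed[m, j]))
    return np.mean(scores)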
A.3 IMAGE INPAINTING DATASETS AND MASKS
MNIST is a dataset of 60000 train and 10000 test grayscale images of digits from 0 to 9 of size 28x28. We binarize all images in the dataset. For MNIST we consider the Bernoulli log-likelihood as the reconstruction loss:
$$\log p_\theta(x_b|z, x_{1-b}, b) = \sum_{i:\, b_i=1} \log \mathrm{Bernoulli}\big(x_i\,|\,p_{\theta,i}(z, x_{1-b}, b)\big),$$
where pθ,i(z, x1−b, b) is an output of the generative neural network. We use 16 latent variables. In the mask for this dataset, the observed pixels form a three-pixel-wide horizontal line whose position is distributed uniformly.
Omniglot is a dataset of 19280 train and 13180 test black-and-white images of symbols from different alphabets, of size 105x105. As in the previous section, the brightness of each pixel is treated as the Bernoulli probability of it being 1. The mask we use is a random rectangle, described below. We use 64 latent variables. We train the model for 50 epochs and choose the best model according to the IWAE log-likelihood estimate on the validation set after each epoch.
CelebA is a dataset of 162770 train, 19867 validation and 19962 test color images of celebrity faces of size 178x218. Before learning we normalize the channels of the dataset. We use the logarithm of a fully-factorized Gaussian distribution as the reconstruction loss. The mask we use is a random rectangle, described below. We use 32 latent variables.
A rectangular mask is the common shape of the unobserved region in image inpainting. We use such masks for Omniglot and CelebA. We sample the corner points of the rectangles uniformly on the image, but reject rectangles whose area is less than a quarter of the image area.
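A sketch of this rejection-sampling procedure, with unobserved pixels marked by 1:

import torch

def sample_rectangular_mask(height, width, min_area_fraction=0.25):
    while True:
        ys = sorted(torch.randint(0, height + 1, (2,)).tolist())
        xs = sorted(torch.randint(0, width + 1, (2,)).tolist())
        if (ys[1] - ys[0]) * (xs[1] - xs[0]) >= min_area_fraction * height * width:
            mask = torch.zeros(height, width)
            mask[ys[0]:ys[1], xs[0]:xs[1]] = 1.0
            return mask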
In Li et al. (2017) six different masks O1–O6 are used on the testing stage. We reconstruct the positions of masks from the illustrations in the paper and give their coordinates in table 6. The visualizations of the masks are available in figure 10.
At the training stage we used a rectangular mask with uniformly random corners. We reject masks with width or height less than 16pt. We use 64 latent variables and take the best model over 50 epochs based on the validation IWAE log-likelihood estimate. We can obtain slightly higher PSNR values than reported in table 4 if we use only masks O1-O6 at the training stage.
In Yeh et al. (2017) four types of masks are used. The center mask is an unobserved 32x32 square in the center of a 64x64 image. The half mask means that one of the upper, lower, left or right halves of the image is unobserved; all these halves are equiprobable. The random mask uses a pixelwise-independent Bernoulli distribution with probability 0.8 to form the mask of unobserved pixels. The pattern mask is proposed in Pathak et al. (2016). As we deduced from the code 3, the generation process is as follows: first we generate a 600x600 one-channel image with a uniform distribution over pixels, then bicubically interpolate it to an image of size 10000x10000, and then apply the Heaviside step function H(x − 0.25) (i.e. all points with value less than 0.25 are considered unobserved). To sample a mask, we sample a random position in this 10000x10000 binary image and crop a 64x64 mask. If less than 20% or more than 30% of the pixels are unobserved, the mask is rejected and the position is sampled again. Unlike that paper, in section 5.2 we use the same distribution over masks at the training and testing stages. We use VAEAC with 64 latent variables and take the best model over 50 epochs based on the validation IWAE log-likelihood estimate.
A.4 GAIN IMPLEMENTATION DETAILS
For missing feature imputation we reimplemented GAIN in PyTorch based on the paper (Yoon et al., 2018) and the available TensorFlow source code for image inpainting 4.
For categorical features we use one-hot encoding. We observe in experiments that it works better in terms of NRMSE and PFC than processing categorical features in GAIN as continuous ones and then rounding them to the nearest category.
For categorical features we also use the reconstruction loss $L_M(x_i, x'_i) = -\frac{1}{|X_i|}\sum_{j=1}^{|X_i|} x_{i,j} \log(x'_{i,j})$, where |Xi| is the number of categories of the i-th feature and xi,j is the j-th component of the one-hot encoding of the feature xi. Such an LM enforces an equal contribution of each categorical feature to the whole reconstruction loss.
We use one more modification of LM(x, x′) for binary and categorical features. The cross-entropy loss in LM penalizes incorrect reconstructions of categorical and binary features much more than incorrect reconstructions of continuous ones. To avoid this imbalance, we mix the L2 and cross-entropy reconstruction losses for binary and categorical features with weights 0.8 and 0.2 respectively:
$$L'_M(x_i, x'_i) = 0.2 \cdot L_M(x_i, x'_i) + 0.8 \cdot \begin{cases} \frac{1}{|X_i|}\sum_{j=1}^{|X_i|} (x_{i,j} - x'_{i,j})^2, & \text{if } x_i \text{ is categorical} \\ (x_i - x'_i)^2, & \text{if } x_i \text{ is binary} \end{cases} \quad (12)$$
We observe in experiments that this modification also works better in terms of NRMSE and PFC than the original model.
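For reference, the categorical branch of (12) could be written as follows (a sketch; x and x_rec are assumed to be one-hot targets and predicted category probabilities):

import torch

def mixed_categorical_loss(x, x_rec, eps=1e-8):
    n_cat = x.shape[-1]
    ce = -(x * (x_rec + eps).log()).sum(-1) / n_cat  # cross-entropy term of L_M
    mse = ((x - x_rec) ** 2).mean(-1)                # squared-error term
    return 0.2 * ce + 0.8 * mse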
We use a validation set containing 5% of the observed features for best model selection (the hyperparameter is the number of iterations).
In the original GAIN paper the authors propose to use cross-validation for the hyperparameter α ∈ {0.1, 0.5, 1, 2, 10}. We observe that using α = 10 and a hint h = b ◦ m + 0.5(1 − b), where the vector b is sampled from a Bernoulli distribution with p = 0.01, provides better results in terms of NRMSE and PFC than the original model with every α ∈ {0.1, 0.5, 1, 2, 10}. Such a hint distribution makes the model theoretically inconsistent, but it works well in practice (see table 7).
Table 7 shows that our modifications consistently provide imputations that are no worse, and often better, than the original GAIN (in terms of NRMSE and PFC on the considered datasets). So in this paper, for the missing feature imputation problem, we report the results of our modification of GAIN.
3 https://github.com/pathak22/context-encoder/blob/master/train_random.lua#L273
4 https://github.com/jsyoon0823/GAIN
B THEORY
B.1 VAEAC UNIVERSALITY
The theoretical guarantees that VAEAC can model an arbitrary distribution are based on the same guarantees for the Conditional Variational Autoencoder (CVAE). We prove below that if CVAE can model each of the conditional distributions p(xb|x1−b), then VAEAC can model all of them. We can imagine 2^D CVAEs, one learned for each mask. Because neural networks are universal approximators, the VAEAC networks can model the union of the CVAE networks, so that the VAEAC networks perform the transformations defined by the networks of the CVAE corresponding to the given mask:
$$p_{\psi,VAEAC}(z|x_{1-b}, b) = p_{\psi,CVAE,1-b}(z|x_{1-b}) \quad \forall x, b$$
$$p_{\theta,VAEAC}(x_b|z, x_{1-b}, b) = p_{\theta,CVAE,1-b}(x_b|z, x_{1-b}) \quad \forall z, x, b$$

So if CVAE can model any distribution p(x|y), so can VAEAC. The guarantees for CVAE in the case of continuous variables are based on the fact that every smooth distribution can be approximated by a large enough mixture of Gaussians, which is a special case of CVAE's generative model. These guarantees can be extended to the case of mixed categorical-continuous variables as well. Admittedly, there are distributions over categorical variables that CVAE with Gaussian prior and proposal distributions cannot learn. Nevertheless, this kind of limitation is not fundamental; it is caused by a poor proposal distribution family.
B.2 WHY VAEAC NEEDS TARGET VALUES FOR MISSING FEATURES IMPUTATION?
Consider a dataset with D-dimensional objects x, where each feature may be missing (which we denote by xi = ω), and their target values y. In this section we show that better results are achieved when our model learns the concatenation of the object features x and the targets y. The following example shows why this is necessary. Consider a dataset where x1 = 1, x2 ∼ N(x2|y, 1), and pd(y = 0) = pd(y = 5) = 0.5. In this case pd(x2|x1 = 1) = 0.5 N(x2|0, 1) + 0.5 N(x2|5, 1). We can see that generating data from pd(x2|x1) may only confuse the classifier, because with probability 0.5 it generates x2 ∼ N(0, 1) for y = 5 and x2 ∼ N(5, 1) for y = 0. On the other hand, pd(x2|x1, y) = N(x2|y, 1). Filling gaps using pd(x2|x1, y) can only improve the classifier or regressor by giving it some information from the joint distribution pd(x, y), thus simplifying the dependence to be learned at training time. So we treat y as an additional feature that is always unobserved at testing time.
B.3 UNIVERSAL MARGINALIZER: TRAINING PROCEDURE MODIFICATION
A problem the authors did not address in the original paper is the relation between the distribution of unobserved components p(b) at the testing stage and the distribution of masks in the requests to UM, p̂(b). The distribution over masks p(b) induces the distribution p̂(b), and in most cases p(b) ≠ p̂(b). The distribution p̂(b) also depends on the permutations (i1, i2, . . . , i|b|) that we use to generate objects.
We observed in experiments that UM must be trained using the unobserved mask distribution p̂(b). For example, if all masks from p(b) have a fixed number of unobserved components (e.g., D/2), then UM will never see an example of a mask with 1, 2, . . . , D/2 − 1 unobserved components, which is necessary to generate a sample conditioned on D/2 components. That leads to a drastically low likelihood estimate on the test set and unrealistic samples.
We developed an easy generative process for p̂(b) for arbitrary p(b) when the permutation of unobserved components (i1, i2, . . . , i|b|) is chosen randomly and equiprobably: first we generate b0 ∼ p(b) and u ∼ U[0, 1], then b1 ∼ Bernoulli(u)^D, and set b = b0 ◦ b1. A more complicated generative process exists for a sorted permutation where i_{j−1} < i_j ∀j: 2 ≤ j ≤ |b|. In experiments we use the uniform distribution over permutations.
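This generative process takes a few lines of code; a sketch, where sample_p_b draws a mask from the test-time distribution p(b):

import torch

def sample_um_training_mask(sample_p_b, dim):
    # b = b0 ∘ b1 with b0 ~ p(b), u ~ U[0, 1], b1 ~ Bernoulli(u)^D
    b0 = sample_p_b()
    u = torch.rand(())
    b1 = (torch.rand(dim) < u).float()
    return b0 * b1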
C GAUSSIAN STOCHASTIC NEURAL NETWORK
The Gaussian stochastic neural network (13) and the hybrid model (14) were originally proposed in the paper on Conditional VAE (Sohn et al., 2015). The motivation the authors mention in the paper is as follows: during training, the proposal distribution qφ(z|x, y) is used to generate the latent variables z, while at the testing stage the prior pψ(z|y) is used. The KL divergence tries to close the gap between the two distributions but, according to the authors, it is not enough. To overcome the issue, the authors propose to use a hybrid model (14), a weighted mixture of the variational lower bound (3) and a single-sample Monte Carlo estimate of the log-likelihood (13). The model corresponding to the second term is called the Gaussian Stochastic Neural Network (13), because it is a feed-forward neural network with a single Gaussian stochastic layer in the middle. GSNN is also a special case of CVAE where qφ(z|x, y) = pψ(z|y).
$$\mathcal{L}_{GSNN}(x, y; \theta, \psi) = \mathbb{E}_{p_\psi(z|y)} \log p_\theta(x|z, y) \quad (13)$$
$$\mathcal{L}(x, y; \theta, \psi, \phi) = \alpha \mathcal{L}_{CVAE}(x, y; \theta, \psi, \phi) + (1-\alpha) \mathcal{L}_{GSNN}(x, y; \theta, \psi), \quad \alpha \in [0, 1] \quad (14)$$
Authors report that hybrid model and GSNN outperform CVAE in terms of segmentation accuracy on the majority of datasets.
We can also add that this technique seems to soften the “holes problem” (Makhzani et al., 2016). Makhzani et al. (2016) observe that vectors z from the prior distribution may differ enough from all vectors z from the proposal distribution at the training stage that the generator network becomes confused at the testing stage. Due to this problem, CVAE can produce good reconstructions of y given z ∼ qφ(z|x, y), while samples of y given z ∼ pψ(z|x) are not realistic.
The same trick is applicable to our model as well:
$$\mathcal{L}_{GSNN}(x, b; \theta, \psi) = \mathbb{E}_{p_\psi(z|x_{1-b}, b)} \log p_\theta(x_b|z, x_{1-b}, b) \quad (15)$$
$$\mathcal{L}(x, b; \theta, \psi, \phi) = \alpha \mathcal{L}_{VAEAC}(x, b; \theta, \psi, \phi) + (1-\alpha) \mathcal{L}_{GSNN}(x, b; \theta, \psi), \quad \alpha \in [0, 1] \quad (16)$$
In order to reflect the difference between sampling z from prior and proposal distributions, authors of CVAE use two methods of log-likelihood estimation:
$$\log p_{\theta,\psi}(x|y) \approx \log \frac{1}{S} \sum_{i=1}^S p_\theta(x|z_i, y), \quad z_i \sim p_\psi(z|y) \quad (17)$$
$$\log p_{\theta,\psi}(x|y) \approx \log \frac{1}{S} \sum_{i=1}^S \frac{p_\theta(x|z_i, y)\, p_\psi(z_i|y)}{q_\phi(z_i|x, y)}, \quad z_i \sim q_\phi(z|x, y) \quad (18)$$
The first estimator is called the Monte Carlo estimator and the second the Importance Sampling estimator (also known as IWAE). They are asymptotically equivalent, but in practice the Monte Carlo estimator requires many more samples to obtain the same estimation accuracy. A small S leads to underestimation of the log-likelihood for both Monte Carlo and Importance Sampling (Burda et al., 2015), but for Monte Carlo the underestimation is expressed much more strongly.
We perform an additional study of GSNN and the hybrid model and show that they have drawbacks when the target distribution p(x|y) has multiple distinct local maxima.
C.1 THEORETICAL STUDY
In this section we show why GSNN cannot learn distributions with several distinct modes, which leads to blurry image samples.
For simplicity of notation, we consider the hybrid model for a standard VAE:
$$\mathcal{L}(x; \phi, \psi, \theta) = \alpha\, \mathbb{E}_{z \sim q_\phi(z|x)} \log \frac{p_\theta(x|z)\, p_\psi(z)}{q_\phi(z|x)} + (1-\alpha)\, \mathbb{E}_{z \sim p_\psi(z)} \log p_\theta(x|z) \quad (19)$$
The hybrid model (16) for VAEAC can be obtained from (19) by replacing x with xb and conditioning all distributions on x1−b and b. The validity of the further equations and conclusions remains for VAEAC after this replacement.
Consider now a categorical latent variable z that can take one of K values. Let x be a random variable with true distribution pd(x) to be modeled. Consider the following true data distribution: pd(x = xi) = 1/K for i ∈ {1, 2, . . . , K} and some values x1, x2, . . . , xK. So the true distribution has K different equiprobable modes. Suppose the generator network NNθ models the mapping from z to some vector of parameters vz = NNθ(z). Thus, we define the generative distribution as some function of these parameters: pθ(x|z) = f(x, vz). Therefore, the parameters θ are just the set v1, v2, . . . , vK. For simplicity we assume pψ(z) = 1/K. Taking pψ(z) = 1/K into account, we obtain the optimal $q(z = i|x) = \frac{f(x, v_i)}{\sum_{j=1}^K f(x, v_j)}$. Using (19) and the above formulas for qφ, pψ and pθ, we obtain the following optimization problem:

$$\max_{v_1, v_2, \ldots, v_K} \frac{1}{K} \sum_{i=1}^K \left[ \alpha \sum_{j=1}^K \frac{f(x_i, v_j)}{\sum_{k=1}^K f(x_i, v_k)} \log \frac{f(x_i, v_j)\, \frac{1}{K}}{f(x_i, v_j)\big/\sum_{k=1}^K f(x_i, v_k)} + (1-\alpha) \sum_{j=1}^K \frac{1}{K} \log f(x_i, v_j) \right] \quad (20)$$
It is easy to show that (20) is equivalent to the following optimization problem:
$$\max_{v_1, v_2, \ldots, v_K} \sum_{i=1}^K \left[ \alpha \log \frac{\sum_{j=1}^K f(x_i, v_j)}{K} + (1-\alpha) \sum_{j=1}^K \frac{1}{K} \log f(x_i, v_j) \right] \quad (21)$$

It is clear from (21) that when α = 1 the log-likelihood of the initial model is optimized. On the other hand, when α = 0 the optimal point is $v_1 = v_2 = \cdots = v_K = \arg\max_v \sum_{i=1}^K \log f(x_i, v)$, i.e. z does not influence the generative process, and for each z the generator produces the same v, which maximizes the likelihood of the generative model f(x, v) on the given dataset of x's. For Bernoulli and Gaussian generative distributions f, such a v is just the average of all modes x1, x2, . . . , xK. That explains why we observe blurry images when using the GSNN model below.
The same conclusion holds for continuous latent variables instead of categorical ones. Given K different modes in the true data distribution, VAE uses the proposal network to separate the prior distribution into K components (i.e. regions in the latent space), so that each region corresponds to one mode. On the other hand, in GSNN z is sampled independently of the mode that is to be reconstructed from it, so for each z the generator has to produce parameters suitable for all modes.
From this point of view, there is no difference between VAE and VAEAC. If the true conditional distribution has several different modes, then VAEAC can fit them all, while GSNN learns their average. If the true conditional distribution has one mode, GSNN and VAEAC are equivalent, and GSNN may even learn faster because it has fewer parameters.
The hybrid model is a trade-off between VAEAC and GSNN: the closer α is to zero, the more blurry and the closer to the average the model distribution becomes. The exact dependence of the model distribution on α can be derived analytically for simple data distributions or evaluated experimentally. We perform such an experimental evaluation in the next sections.
C.2 SYNTHETIC DATA
In this section we show that VAEAC is capable of learning a complex multimodal distribution of synthetic data while GSNN and the hybrid model are not. Let x ∈ R² and p(b_1 = 1) = p(b_2 = 1) = 0.5. The data distribution is p_d(x) = (1/8) Σ_{i=1}^{8} N(x|μ_i, (1/10) I), where μ_i ∼ N(μ_i|0, I). The distribution p_d(x) is plotted in figure 6. The dataset contains 100000 points sampled from p_d(x). We use a multi-layer perceptron with four ReLU layers of sizes 400-200-100-50 and 25-dimensional Gaussian latent variables.
For different mixture coefficients α we visualize samples from the learned distributions pψ,θ(x1, x2), pψ,θ(x1|x2), and pψ,θ(x2|x1). The observed features for the conditional distributions are generated from the marginal distributions p(x2) and p(x1) respectively.
We see in table 8 and in figure 7 that even with a very small GSNN weight the model is prevented from learning distributions with several local optima.
Figure 6: Probability density function of the synthetic data distribution.
Figure 7: VAEAC for synthetic data. Rows correspond to α = 1, 0.99, and 0.9. Columns: x_1 ∼ p_{ψ,θ}(x_1|x_2) with observed x_2 ∼ p(x_2); x_2 ∼ p_{ψ,θ}(x_2|x_1) with observed x_1 ∼ p(x_1); and (x_1, x_2) ∼ p_{ψ,θ}(x_1, x_2) with both features unknown.
Figure 8: MNIST inpaintings for (a) VAEAC and (b) GSNN. Left: input, with gray pixels unobserved; middle: samples from the model; right: ground truth.
GSNN also increases the Monte-Carlo log-likelihood estimate computed with a few samples while decreasing the much more precise Importance Sampling estimate. When α = 0.9 the whole distribution structure is lost.
We see that using α ≠ 1 ruins the multimodality of the restored distribution, so we highly recommend using α = 1, or at least α ≈ 1.
C.3 COMPARISON ON THE IMAGE INPAINTING PROBLEM
In figure 8 we can see that the inpaintings produced by GSNN are smooth, blurry and not diverse compared with VAEAC.
Table 9 shows that VAEAC learns the distribution over inpaintings better than GSNN in terms of test log-likelihood. Nevertheless, Monte-Carlo estimates with a small number of samples are sometimes better for GSNN, which indicates fewer local modes in the learned distribution and more blurriness in the samples.
D ADDITIONAL EXPERIMENTS
D.1 CONVERGENCE SPEED
In figure 9 one can see that VAEAC has a convergence speed similar to VAE in terms of iterations on the MNIST dataset. In our experiments we observed the same behaviour for other datasets. Each iteration of VAEAC is about 1.5 times slower than VAE due to the use of three networks instead of two.
D.2 MISSING FEATURES IMPUTATION
We evaluate the quality of imputations on different datasets (mostly from UCI (Lichman, 2013)). The evaluation is performed for VAEAC, GSNN (15) and NN (a neural network, which can be considered a special case of GSNN where pθ(z|x1−b, b) is a delta-function, and which produces a single imputation). We compare these methods with MICE (Buuren & Groothuis-Oudshoorn, 2010), MissForest (Stekhoven & Bühlmann, 2011), and GAIN (Yoon et al., 2018).
We see that for some datasets MICE and MissForest outperform VAEAC, GSNN and NN. The reason is that for some datasets a random forest is a more natural model structure than a neural network.
The results also show that VAEAC, GSNN and NN have similar imputation performance in terms of NRMSE, PFC, post-imputation R2-score and accuracy. Given the result from appendix C, we can take this as weak evidence that the distribution of imputations has only one local maximum for the datasets from (Lichman, 2013).
Left: input. The gray pixels are unobserved. Middle: samples from VAEAC. Right: ground truth.
D.3 FACE INPAINTINGS
In figure 10 we provide samples of VAEAC on the CelebA dataset for the masks from (Li et al., 2017).
D.4 GAIN FOR IMAGE INPAINTING
GAIN (Yoon et al., 2018) doesn't use unobserved data during training, which makes it easier to apply to the missing features imputation problem. Nevertheless, this turns into a disadvantage when fully-observed training data is available but the missingness rate at the testing stage is high.
We consider the horizontal line mask for MNIST which is described in appendix A.3. We use the released GAIN code 5 with a different mask generator. The inpaintings from VAEAC which uses the unobserved pixels during training are available in figure 1. The inpaintings from GAIN which ignores unobserved pixels are provided in figure 11. As can be seen in figure 11, GAIN fails to learn conditional distribution for given mask distribution p(b).
Nevertheless, we don't claim that GAIN is not suitable for image inpainting. As shown in the supplementary material of (Yoon et al., 2018) and in the corresponding code, GAIN is able to learn conditional distributions when p(b) is a pixel-wise independent Bernoulli distribution with probability 0.5.
5: https://github.com/jsyoon0823/GAIN
Left: input. The gray pixels are unobserved. Middle: samples from the model. Right: ground truth.
D.5 UNIVERSAL MARGINALIZER: ILLUSTRATIONS
In figure 12 we provide samples of Universal Marginalizer (UM) and VAEAC for the same inputs.
Consider the case when UM marginal distributions are parametrized with Gaussians. The most simple example of a distribution, which UM cannot learn but VAEAC can, is given in figure 13. | 1. What is the main contribution of the paper regarding missing data imputation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty and clarity?
3. Do you have any concerns or suggestions regarding the experimental results and their presentation?
4. How does the reviewer assess the quality and reproducibility of the paper's content?
5. Are there any questions or issues regarding the paper's notation, grammar, and spelling? | Review | Review
The goal of this paper is to use deep generative models for missing data imputation. This paper proposes learning a latent variable deep generative model over every randomly sampled subset of observed features. First, a masking variable is sampled from a chosen prior distribution. The mask determines which features are observed. Then, the likelihood of the observed features is maximized via a lower bound. Inference in this latent variable model is achieved through the use of an inference network which conditions on the set of "missing" (to the generative model) features.
Novelty:
Generative models have a long history of being used to impute missing data. e.g. http://www.cs.toronto.edu/~fritz/absps/ranzato_cvpr2011.pdf, https://arxiv.org/pdf/1610.04167.pdf,
https://arxiv.org/pdf/1808.01684.pdf, https://arxiv.org/pdf/1401.4082.pdf [Appendix F]
It is a little difficult to gauge what the novelty of this work is.
Clarity
This is a poorly written paper. Distilling the proposed methodology down to one paragraph was challenging since the text meanders through several concepts whose relevance to the overarching goal is questionable. For example, it is not clear what Section 3.2 adds to the discussion. The text describes a heuristic used in learning GSNNs only to say that the loss function used by GSNNs is not used in the experimental section for this paper -- this renders most of 4.3.2 redundant. There are issues like awkward grammar, sloppy notation, and spelling mistakes (please run spell check!) throughout the manuscript. Please use a different notation when referring to the variational distributions (do not re-use "p").
Experimental Results
The model is evaluated against MICE and MissForest on UCI datasets. RMSE and accuracy of classification (from imputed data is compared). The complexity of data considered is simplistic (and may not make use of the expressivity of the deep generative model). Why not run these experiments on datasets like MNIST and Omniglot?
Beyond that:
(a) was there any comparison of how classification performance behaves when using another neural-network-based imputation baseline (e.g. the method in Yoon et al.)?
(b) the *kind* of missingness considered here appears to be MCAR (the easiest kind to tackle) -- did you consider experiments with other kinds of missingness?
The qualitative results presented in this work are interesting. The method does appear to produce more diverse in-paintings than the method from Yeh et al. (though the examples considered are not aligned).
Table 5 claims negative log-likelihood numbers on MNIST as low as 61 and 41 (I assume nats...). These numbers do not make sense. How were they computed?
Priors on b:
What kind of priors on b did you experiment with? |
ICLR | Title
Variational Autoencoder with Arbitrary Conditioning
Abstract
We propose a single neural probabilistic model based on variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in “one shot”. The features may be both real-valued and categorical. Training of the model is performed by stochastic variational Bayes. The experimental evaluation on synthetic data, as well as feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and diversity of the generated samples.
1 INTRODUCTION
In past years, a number of generative probabilistic models based on neural networks have been proposed. The most popular approaches include the variational autoencoder (Kingma & Welling, 2013) (VAE) and generative adversarial nets (Goodfellow et al., 2014) (GANs). They learn a distribution over objects p(x) and allow sampling from this distribution.
In many cases, we are interested in learning a conditional distribution p(x|y). For instance, if x is an image of a face, y could be the characteristics describing the face (are glasses present or not; length of hair, etc.) Conditional variational autoencoder (Sohn et al., 2015) and conditional generative adversarial nets (Mirza & Osindero, 2014) are popular methods for this problem.
In this paper, we consider the problem of learning all conditional distributions of the form p(x_I | x_{U∖I}), where U is the set of all features and I is an arbitrary subset of it. This problem generalizes both learning the joint distribution p(x) and learning the conditional distribution p(x|y). To tackle this problem, we propose a Variational Autoencoder with Arbitrary Conditioning (VAEAC) model. It is a latent variable model similar to VAE, but it allows conditioning on an arbitrary subset of the features. The conditioning features affect the prior on the latent Gaussian variables which are used to generate the unobserved features. The model is trained using stochastic gradient variational Bayes (Kingma & Welling, 2013).
We consider two most natural applications of the proposed model. The first one is feature imputation where the goal is to restore the missing features given the observed ones. The imputed values may be valuable by themselves or may improve the performance of other machine learning algorithms which process the dataset. Another application is image inpainting in which the goal is to fill in an unobserved part of an image with an artificial content in a realistic way. This can be used for removing unnecessary objects from the images or, vice versa, for complementing the partially closed or corrupted object.
∗Author is now at DeepMind.
The experimental evaluation shows that the proposed model successfully samples from the conditional distributions. The distribution over samples is close to the true conditional distribution. This property is very important when the true distribution has several modes. The model is shown to be effective in feature imputation problem which helps to increase the quality of subsequent discriminative models on different problems from UCI datasets collection (Lichman, 2013). We demonstrate that model can generate diverse and realistic image inpaintings on MNIST (LeCun et al., 1998), Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets, and works even better than the current state of the art inpainting techniques in terms of peak signal to noise ratio (PSNR).
The paper is organized as follows. In section 2 we review the related works. In section 3 we briefly describe variational autoencoders and conditional variational autoencoders. In section 4 we define the problem, describe the VAEAC model and its training procedure. In section 5 we evaluate VAEAC. Section 6 concludes the paper. Appendix contains additional explanations, theoretical analysis, and experiments for VAEAC.
2 RELATED WORK
Universal Marginalizer (Douglas et al., 2017) is a model based on a feed-forward neural network which approximates marginals of unobserved features conditioned on observable values. A related idea of an autoregressive model of joint probability was previously proposed in Germain et al. (2015) and Uria et al. (2016). The description of the model and comparison with VAEAC are available in section 5.3.
Yoon et al. (2018) propose a GANs-based model called GAIN which solves the same problem as VAEAC. In contrast to VAEAC, GAIN does not use unobserved data during training, which makes it easier to apply to the missing features imputation problem. Nevertheless, it turns into a disadvantage when the fully-observed training data is available but the missingness rate at the testing stage is high. For example, in inpainting setting GAIN cannot learn the conditional distribution over MNIST digits given one horizontal line of the image while VAEAC can (see appendix D.4). The comparison of VAEAC and GAIN on the missing feature imputation problem is given in section 5.1 and appendix D.2.
Rezende et al. (2014) [Appendix F], Sohl-Dickstein et al. (2015), Goyal et al. (2017), and Bordes et al. (2017) propose to fill missing data with noise and run a Markov chain with a learned transition operator. The stationary distribution of such chains approximates the true conditional distribution of the unobserved features. Bachman & Precup (2015) consider missing feature imputation in terms of a Markov decision process and propose an LSTM-based sequential decision making model to solve it. Nevertheless, these methods are computationally expensive at test time and require fully-observed training data.
Image inpainting is a classic computer vision problem. Most of the earlier methods rely on local and texture information or hand-crafted problem-specific features (Bertalmio et al., 2000). In past years multiple neural network based approaches have been proposed.
Pathak et al. (2016), Yeh et al. (2016) and Yang et al. (2017) use different kinds and combinations of adversarial, reconstruction, texture and other losses. Li et al. (2017) focuses on face inpainting and uses two adversarial losses and one semantic parsing loss to train the generative model. In Yeh et al. (2017) GANs are first trained on the whole training dataset. The inpainting is an optimization procedure that finds the latent variables that explain the observed features best. Then, the obtained latents are passed through the generative model to restore the unobserved portion of the image. We can say that VAEAC is a similar model which uses a prior network to find proper latents instead of solving the optimization problem.
All described methods aim to produce a single realistic inpainting, while VAEAC is capable of sampling diverse inpaintings. Additionally, Yeh et al. (2016), Yang et al. (2017) and Yeh et al. (2017) have high test-time computational complexity of inpainting, because they require an optimization problem to be solved. On the other hand, VAEAC is a "single-shot" method with a low computational cost.
3 BACKGROUND
3.1 VARIATIONAL AUTOENCODER
Variational autoencoder (Kingma & Welling, 2013) (VAE) is a directed generative model with latent variables. The generative process in variational autoencoder is as follows: first, a latent variable z is generated from the prior distribution p(z), and then the data x is generated from the generative distribution pθ(x|z), where θ are the generative model’s parameters. This process induces the distribution pθ(x) = Ep(z)pθ(x|z). The distribution pθ(x|z) is modeled by a neural network with parameters θ. p(z) is a standard Gaussian distribution.
The parameters θ are tuned by maximizing the likelihood of the training data points {xi}Ni=1 from the true data distribution pd(x). In general, this optimization problem is challenging due to intractable posterior inference. However, a variational lower bound can be optimized efficiently using backpropagation and stochastic gradient descent:
log p_θ(x) = E_{q_φ(z|x)} log [p_θ(x, z) / q_φ(z|x)] + D_KL(q_φ(z|x) ‖ p(z|x, θ))
≥ E_{q_φ(z|x)} log p_θ(x|z) − D_KL(q_φ(z|x) ‖ p(z)) = L_VAE(x; θ, φ)   (1)
Here q_φ(z|x) is a proposal distribution parameterized by a neural network with parameters φ that approximates the posterior p(z|x, θ). Usually this distribution is Gaussian with a diagonal covariance matrix. The closer q_φ(z|x) is to p(z|x, θ), the tighter the variational lower bound L_VAE(x; θ, φ). To compute the gradient of the variational lower bound with respect to φ, the reparameterization trick is used: z = μ_φ(x) + ε σ_φ(x), where ε ∼ N(0, I) and μ_φ and σ_φ are deterministic functions parameterized by neural networks. The gradient can then be estimated using the Monte-Carlo method for the first term and computing the second term analytically:
∂L_VAE(x; θ, φ)/∂φ = E_{ε∼N(0,I)} ∂/∂φ log p_θ(x|μ_φ(x) + ε σ_φ(x)) − ∂/∂φ D_KL(q_φ(z|x) ‖ p(z))   (2)
So L_VAE(x; θ, φ) can be optimized using stochastic gradient ascent with respect to φ and θ.
3.2 CONDITIONAL VARIATIONAL AUTOENCODER
Conditional variational autoencoder (Sohn et al., 2015) (CVAE) approximates the conditional distribution pd(x|y). It outperforms deterministic models when the distribution pd(x|y) is multi-modal (diverse xs are probable for the given y). For example, assume that x is a real-valued image. Then, a deterministic regression model with mean squared error loss would predict the average blurry value for x. On the other hand, CVAE learns the distribution of x, from which one can sample diverse and realistic objects.
Variational lower bound for CVAE can be derived similarly to VAE by conditioning all considered distributions on y:
L_CVAE(x, y; θ, ψ, φ) = E_{q_φ(z|x,y)} log p_θ(x|z, y) − D_KL(q_φ(z|x, y) ‖ p_ψ(z|y)) ≤ log p_{θ,ψ}(x|y)   (3)
Similarly to VAE, this objective is optimized using the reparameterization trick. Note that the prior distribution pψ(z|y) is conditioned on y and is modeled by a neural network with parameters ψ. Thus, CVAE uses three trainable neural networks, while VAE only uses two.
The authors also propose modifications of CVAE such as the Gaussian stochastic neural network and the hybrid model. These modifications can be applied to our model as well. Nevertheless, we don't use them because of a disadvantage described in appendix C.
4 VARIATIONAL AUTOENCODER WITH ARBITRARY CONDITIONING
4.1 PROBLEM STATEMENT
Consider a distribution pd(x) over a D-dimensional vector x with real or categorical components. The components of the vector are called features.
Let the binary vector b ∈ {0, 1}^D be the mask of unobserved features of the object. Then we describe the vector of unobserved features as x_b = {x_i : b_i = 1}. For example, x_{(0,1,1,0,1)} = (x_2, x_3, x_5). Using this notation we denote x_{1−b} as the vector of observed features.
Our goal is to build a model of the conditional distribution pψ,θ(xb|x1−b, b) ≈ pd(xb|x1−b, b) for an arbitrary b, where ψ and θ are parameters that are used in our model at the testing stage.
However, the true distribution pd(xb|x1−b, b) is intractable without strong assumptions about pd(x). Therefore, our model pψ,θ(xb|x1−b, b) has to be more precise for some b and less precise for others. To formalize our requirements about the accuracy of our model we introduce the distribution p(b) over different unobserved feature masks. The distribution p(b) is arbitrary and may be defined by the user depending on the problem. Generally it should have full support over {0, 1}D so that pψ,θ(xb|x1−b, b) can evaluate arbitrary conditioning. Nevertheless, it is not necessary if the model is used for specific kinds of conditioning (as we do in section 5.2).
Using p(b) we can introduce the following log-likelihood objective function for the model:
max_{ψ,θ} E_{p_d(x)} E_{p(b)} log p_{ψ,θ}(x_b|x_{1−b}, b)   (4)
The special cases of the objective (4) are variational autoencoder (bi = 1 ∀i ∈ {1, . . . , D}) and conditional variational autoencoder (b is constant).
4.2 MODEL DESCRIPTION
The generative process of our model is similar to the generative process of CVAE: for each object firstly we generate z ∼ pψ(z|x1−b, b) using prior network, and then sample unobserved features xb ∼ pθ(xb|z, x1−b, b) using generative network. This process induces the following model distribution over unobserved features:
p_{ψ,θ}(x_b|x_{1−b}, b) = E_{z∼p_ψ(z|x_{1−b},b)} p_θ(x_b|z, x_{1−b}, b)   (5)
We use z ∈ R^d and a Gaussian distribution p_ψ over z with parameters from a neural network with weights ψ: p_ψ(z|x_{1−b}, b) = N(z|μ_ψ(x_{1−b}, b), σ_ψ²(x_{1−b}, b) I). The real-valued components of the distribution p_θ(x_b|z, x_{1−b}, b) are defined likewise. Each categorical component i of the distribution p_θ(x_i|z, x_{1−b}, b) is parameterized by a function w_{i,θ}(z, x_{1−b}, b) whose outputs are logits of probabilities for each category: x_i ∼ Cat[Softmax(w_{i,θ}(z, x_{1−b}, b))]. Therefore the components of the latent vector z are conditionally independent given x_{1−b} and b, and the components of x_b are conditionally independent given z, x_{1−b} and b.
The variables x_b and x_{1−b} have variable length that depends on b. So in order to use architectures such as multi-layer perceptrons and convolutional neural networks we consider x_{1−b} = x ◦ (1 − b), where ◦ is an element-wise product. In the implementation x_{1−b} therefore has a fixed length. The output of the generative network also has a fixed length, but we use only the unobserved components to compute the likelihood.
The theoretical analysis of the model is available in appendix B.1.
4.3 LEARNING VARIATIONAL AUTOENCODER WITH ARBITRARY CONDITIONING
4.3.1 VARIATIONAL LOWER BOUND
We can derive a lower bound for log pψ,θ(xb|x1−b, b) as for variational autoencoder:
log p_{ψ,θ}(x_b|x_{1−b}, b) = E_{q_φ(z|x,b)} log [p_{ψ,θ}(x_b, z|x_{1−b}, b) / q_φ(z|x, b)] + D_KL(q_φ(z|x, b) ‖ p_{ψ,θ}(z|x, b))
≥ E_{q_φ(z|x,b)} log p_θ(x_b|z, x_{1−b}, b) − D_KL(q_φ(z|x, b) ‖ p_ψ(z|x_{1−b}, b)) = L_VAEAC(x, b; θ, ψ, φ)   (6)
Therefore we have the following variational lower bound optimization problem:
max_{θ,ψ,φ} E_{p_d(x)} E_{p(b)} L_VAEAC(x, b; θ, ψ, φ)   (7)
We use a fully-factorized Gaussian proposal distribution q_φ, which allows us to perform the reparameterization trick and compute the KL divergence analytically in order to optimize (7).
4.3.2 PRIOR IN LATENT SPACE
During the optimization of objective (7) the parameters μ_ψ and σ_ψ of the prior distribution of z may tend to infinity, since there is no penalty for large values of those parameters. We usually observe the growth of ‖z‖₂ during training, though it is slow enough. To prevent potential numerical instabilities, we put a Normal-Gamma prior on the parameters of the prior distribution. Formally, we redefine p_ψ(z|x_{1−b}, b) as follows:
p_ψ(z, μ_ψ, σ_ψ|x_{1−b}, b) = N(z|μ_ψ, σ_ψ²) N(μ_ψ|0, σ_μ) Gamma(σ_ψ|2, σ_σ)   (8)
As a result, the regularizers −μ_ψ²/(2σ_μ²) and σ_σ(log σ_ψ − σ_ψ) are added to the model log-likelihood. The hyperparameter σ_μ is chosen to be large (10⁴) and σ_σ is taken to be a small positive number (10⁻⁴). This distribution is close to uniform near zero, so it doesn't affect the learning process significantly.
4.3.3 MISSING FEATURES
The optimization objective (7) requires all features of each object at the training stage: some of the features will be observed variables at the input of the model and other will be unobserved features used to evaluate the model. Nevertheless, in some problem settings the training data contains missing features too. We propose the following slight modification of the problem (7) in order to cover such problems as well.
The missing values cannot be observed, so x_i = ω ⇒ b_i = 1, where ω denotes a missing value in the data. In order to meet this requirement, we redefine the mask distribution as conditioned on x: p(b) turns into p(b|x) in (4) and (7). In the reconstruction loss (5) we simply omit the missing features, i.e. marginalize them out:

log p_θ(x_b|z, x_{1−b}, b) = Σ_{i: b_i=1, x_i≠ω} log p_θ(x_i|z, x_{1−b}, b)   (9)
The proposal network must be able to determine which features came from a real object and which are just missing. So we use an additional missing-features mask which is fed to the proposal network together with the unobserved features mask b and the object x.
The proposed modifications are evaluated in section 5.1.
5 EXPERIMENTS
In this section we validate the performance of VAEAC using several real-world datasets. In the first set of experiments we evaluate the missing features imputation performance of VAEAC using various UCI datasets (Lichman, 2013). We compare imputations from our model with imputations from such classical methods as MICE (Buuren & Groothuis-Oudshoorn, 2010) and MissForest (Stekhoven & Bühlmann, 2011), and the recently proposed GANs-based method GAIN (Yoon et al., 2018). In the second set of experiments we use VAEAC to solve the image inpainting problem. We show inpaintings generated by VAEAC and compare our model with models from the papers Pathak et al. (2016), Yeh et al. (2017) and Li et al. (2017) in terms of peak signal-to-noise ratio (PSNR) of the obtained inpaintings on the CelebA dataset (Liu et al., 2015). And finally, we evaluate VAEAC against the competing method called Universal Marginalizer (Douglas et al., 2017). Additional experiments can be found in appendices C and D. The code is available at https://github.com/tigvarts/vaeac.
5.1 MISSING FEATURES IMPUTATION
The datasets with missing features are widespread. Consider a dataset with D-dimensional objects x where each feature may be missing (which we denote by xi = ω) and their target values y. The majority of discriminative methods do not support missing values in the objects. The procedure of filling in the missing features values is called missing features imputation.
In this section we evaluate the quality of imputations produced by VAEAC. For evaluation we use datasets from UCI repository (Lichman, 2013). Before training we drop randomly 50% of values both in train and test set. After that we impute missing features using MICE (Buuren & Groothuis-Oudshoorn, 2010), MissForest (Stekhoven & Bühlmann, 2011), GAIN (Yoon et al., 2018) and VAEAC trained on the observed data. The details of GAIN implementation are described in appendix A.4.
Our model learns the distribution of the imputations, so it is able to sample from this distribution. We replace each object with missing features by n = 10 objects with sampled imputations, so the size of the dataset increases n-fold. This procedure is called missing features multiple imputation. MICE and GAIN are also capable of multiple imputation (we use n = 10 for them in the experiments as well), but MissForest is not.
For more details about the experimental setup see appendices A.1, A.2, and A.4.
In table 1 we report NRMSE (i.e. RMSE normalized by the standard deviation of each feature and then averaged over all features) of imputations for continuous datasets and proportion of falsely classified (PFC) for categorical ones. For multiple imputation methods we average imputations of continuous variables and take most frequent imputation for categorical ones for each object.
We also learn linear or logistic regression and report the regression or classification performance after applying imputations of different methods in table 2. For multiple imputation methods we average predictions for continuous targets and take most frequent prediction for categorical ones for each object in test set.
As can be seen from the tables 1 and 2, VAEAC can learn joint data distribution and use it for missing feature imputation. The imputations are competitive with current state of the art imputation methods in terms of RMSE, PFC, post-imputation regression R2-score and classification accuracy. Nevertheless, we don’t claim that our method is state of the art in missing features imputation; for some datasets MICE or MissForest outperform it. The additional experiments can be found in appendix D.2.
5.2 IMAGE INPAINTING
The image inpainting problem has a number of different formulations. The formulation of our interest is as follows: some of the pixels of an image are unobserved and we want to restore them in a natural way. Unlike the majority of papers, we want to restore not just one most probable inpainting, but the distribution over all possible inpaintings from which we can sample. This distribution is extremely multi-modal because often there is a lot of different possible ways to inpaint the image.
Unlike the previous subsection, here we have uncorrupted images without missing features in the training set, so p(b|x) = p(b). As we show in section 2, state of the art results use different adversarial losses to achieve more sharp and realistic samples. VAEAC can be adapted to the image inpainting problem by using a combination of those adversarial losses as a part of reconstruction loss pθ(xb|z, x1−b, b). Nevertheless, such construction is out of scope for this research, so we leave it for the future work. In the current work we show that the model can generate both diverse and realistic inpaintings.
In figures 1, 2, 3 and 4 we visualize image inpaintings produced by VAEAC on binarized MNIST (LeCun et al., 1998), Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015). The details of learning procedure and description of datasets are available in appendixes A.1 and A.3.
To the best of our knowledge, the most modern inpainting papers don’t consider the diverse inpainting problem, where the goal is to build diverse image inpaintings, so there is no straightforward way to compare with these models. Nevertheless, we compute peak signal-to-noise ratio (PSNR) for one random inpainting from VAEAC and the best PSNR among 10 random inpaintings from VAEAC. One inpainting might not be similar to the original image, so we also measure how good the inpainting which is most similar to the original image reconstructs it. We compare these two metrics computed for certain masks with the PSNRs for the same masks on CelebA from papers Yeh et al. (2017) and Li et al. (2017). The results are available in tables 3 and 4.
We observe that for the majority of proposed masks our model outperforms the competing methods in terms of PSNR even with one sample, and for the rest (where the inpaintings are significantly diverse) the best PSNR over 10 inpaintings is larger than the same PSNR of the competing models. Even if PSNR does not reflect completely the visual quality of images and tends to encourage blurry VAE samples instead of realistic GANs samples, the results show that VAEAC is able to solve inpainting problem comparably to the state of the art methods. The disadvantage of VAEAC compared to Yeh et al. (2017) and Li et al. (2017) (but
not Pathak et al. (2016)) is that it needs the distribution over masks at the training stage to be similar to the distribution over them at the test stage. However, it is not a very strict limitation for the practical usage.
5.3 UNIVERSAL MARGINALIZER
Universal Marginalizer (Douglas et al., 2017) (UM) is a model which uses a single neural network to estimate the marginal distributions over the unobserved features. So it optimizes the following objective:
max_θ E_{x∼p_d(x)} E_{b∼p(b)} Σ_{i=1}^{D} b_i log p_θ(x_i|x_{1−b}, b)   (10)
For a given mask b we fix a permutation of its unobserved components: (i_1, i_2, …, i_{|b|}), where |b| is the number of unobserved components. Using the learned model and the permutation we can generate objects from the joint distribution and estimate their probability using the chain rule.
log p_θ(x_b|x_{1−b}, b) = Σ_{j=1}^{|b|} log p_θ(x_{i_j} | x_{1−(b − Σ_{k=1}^{j−1} e_{i_k})}, b − Σ_{k=1}^{j−1} e_{i_k})   (11)
For example, p_θ(x_1, x_4, x_5|x_2, x_3) = p_θ(x_4|x_2, x_3) p_θ(x_1|x_2, x_3, x_4) p_θ(x_5|x_1, x_2, x_3, x_4). Conditional sampling or conditional likelihood estimation for one object requires |b| requests to UM to compute p_θ(x_i|x_{1−b}, b). Each request is a forward pass through the neural network. In the case of conditional sampling those requests cannot even be parallelized, because the input of the next request contains the output of the previous one.
We propose a slight modification of the original UM training procedure which allows learning UM efficiently for any kind of masks including those considered in this paper. The details of the modification are described in appendix B.3.
1: The results are from the paper (Yeh et al., 2017).
2: The results are from the paper (Li et al., 2017).
Left: input. The gray pixels are unobserved. Middle: samples from VAEAC. Right: ground truth.
The results of using this modification of UM are provided in table 5. We can say that the relation between VAEAC and UM is similar to the relation between VAE and PixelCNN. The second one is much slower at the testing stage, but it easily takes into account local dependencies in data while the first one is faster but assumes conditional independence of the outputs. Nevertheless, there are a number of cases where UM cannot learn the distribution well while VAEAC can. For example, when the data is real-valued and marginal distributions have many local optima, there is no straightforward parametrization which allows UM to approximate them, and, therefore also the conditioned joint distribution. An example of such distribution and more illustrations for comparison of VAEAC and UM are available in appendix D.5.
6 CONCLUSION
In this paper we consider the problem of simultaneous learning of all conditional distributions for a vector. This problem has a number of different special cases with practical applications. We propose neural network based probabilistic model for distribution conditioning learning with Gaussian latent variables. This model is scalable and efficient in inference and learning. We propose several tricks to improve optimization and give recommendations about hyperparameters choice. The model is successfully applied to feature imputation and inpainting tasks. The experimental results show that the model is competitive with state of the art methods for both missing features imputation and image inpainting problems.
APPENDIX
A EXPERIMENTAL DETAILS
A.1 NEURAL NETWORK ARCHITECTURES
In all experiments we use optimization method Adam (Kingma & Ba, 2014), skip-connections between prior network and generative network inspired by (Mao et al., 2016), (Sønderby et al., 2016) and (Ronneberger et al., 2015), and convolutional neural networks based on ResNet blocks (He et al., 2016).
Without skip-connections all information for the decoder goes through the latent variables. In image inpainting we found skip-connections very useful in terms of both log-likelihood improvement and image realism, because the latent variables are responsible only for the global information while the local information passes through the skip-connections. Therefore the border between the image and the inpainting becomes less conspicuous.
The main idea of neural networks architecture is reflected in figure 5.
The number of hidden layers, their widths and structure may be different.
The neural networks we used for image inpainting have He-Uniform initialization of convolutional ResNet blocks, and the skip-connections are implemented using concatenation, not addition. The proposal network structure is exactly the same as the prior network except skip-connections.
Also one could use much simpler fully-connected networks with one hidden layer as a proposal, prior and generative networks in VAEAC and still obtain nice inpaintings on MNIST.
A.2 MISSING FEATURES IMPUTATION
We split the dataset into train and test set with size ratio 3:1. Before training we drop randomly 50% of values both in train and test set. We repeat each experiment 5 times with different train-test splits and dropped features and then average results and compute their standard deviation.
As we show in appendix B.2, the better results can be achieved when the model learns the concatenation of objects features x and targets y. So we treat y as an additional feature that is always unobserved during the testing time.
To train our model we use a distribution p(b_i|x) in which p(b_i|x_i = ω) = 1 and p(b_i|x) = 0.2 otherwise. Also, for VAEAC training we normalize real-valued features, fix σ_θ = 1 in the generative model of VAEAC in order to optimize RMSE, and use 25% of the training data as a validation set to select the best model among all epochs of training.
For the test set, the classifier or regressor is applied to each of the n imputed objects and the predictions are combined. For regression problems we report R2-score of combined predictions, so we use averaging as a combination method. For classification problem we report accuracy, and therefore choose the mode. We consider the workflow where the imputed values of y are not fed to the classifier or regressor to make a fair comparison of feature imputation quality.
The NRMSE or PFC for a dataset is computed as the average of the NRMSE or PFC over all features of the dataset. The NRMSE of a feature is just the RMSE of its imputations divided by the standard deviation of the feature. The PFC of a feature is the proportion of its imputations which are incorrect.
A.3 IMAGE INPAINTING DATASETS AND MASKS
MNIST is a dataset of 60000 train and 10000 test grayscale images of digits from 0 to 9 of size 28x28. We binarize all images in the dataset. For MNIST we consider the Bernoulli log-likelihood as the reconstruction loss: log p_θ(x_b|z, x_{1−b}, b) = Σ_{i: b_i=1} log Bernoulli(x_i | p_{θ,i}(z, x_{1−b}, b)), where p_{θ,i}(z, x_{1−b}, b) is an output of the generative neural network. We use 16 latent variables. In the mask for this dataset the observed pixels form a three-pixel-wide horizontal line whose position is distributed uniformly.
Omniglot is a dataset of 19280 train and 13180 test black-and-white images of different alphabets' symbols of size 105x105. As in the previous section, the brightness of each pixel is treated as a Bernoulli probability of it being 1. The mask we use is a random rectangle, which is described below. We use 64 latent variables. We train the model for 50 epochs and choose the best model according to the IWAE log-likelihood estimate on the validation set after each epoch.
CelebA is a dataset of 162770 train, 19867 validation and 19962 test color images of faces of celebrities of size 178x218. Before learning we normalize the channels in the dataset. We use the logarithm of a fully-factorized Gaussian distribution as the reconstruction loss. The mask we use is a random rectangle, which is described below. We use 32 latent variables.
The rectangular mask is the common shape of the unobserved region in image inpainting. We use such masks for Omniglot and CelebA. We sample the corner points of the rectangles uniformly on the image, but reject those rectangles whose area is less than a quarter of the image area.
In Li et al. (2017) six different masks O1–O6 are used on the testing stage. We reconstruct the positions of masks from the illustrations in the paper and give their coordinates in table 6. The visualizations of the masks are available in figure 10.
At the training stage we used a rectangle mask with uniformly sampled random corners. We reject masks with width or height less than 16pt. We use 64 latent variables and take the best model over 50 epochs based on the validation IWAE log-likelihood estimate. We can obtain slightly higher PSNR values than reported in table 4 if we use only masks O1–O6 at the training stage.
In Yeh et al. (2017) four types of masks are used. Center mask is just an unobserved 32x32 square in the center of 64x64 image. Half mask mean that one of upper, lower, left or right half of the image is unobserved. All these types of a half are equiprobable. Random mask means that we use pixelwise-independent Bernoulli
distribution with probability 0.8 to form a mask of unobserved pixels. The pattern mask is proposed in Pathak et al. (2016). As we deduced from the code 3, the generation process is as follows: first we generate a 600x600 one-channel image with a uniform distribution over pixels, then bicubically interpolate it to an image of size 10000x10000, and then apply the Heaviside step function H(x − 0.25) (i.e. all points with value less than 0.25 are considered unobserved). To sample a mask we sample a random position in this 10000x10000 binary image and crop a 64x64 mask. If less than 20% or more than 30% of the pixels are unobserved, then the mask is rejected and the position is sampled again. In comparison with this paper, in section 5.2 we use the same distribution over masks at the training and testing stages. We use VAEAC with 64 latent variables and take the best model over 50 epochs based on the validation IWAE log-likelihood estimate.
A.4 GAIN IMPLEMENTATION DETAILS
For missing feature imputation we reimplemented GAIN in PyTorch based on the paper (Yoon et al., 2018) and the available TensorFlow source code for image inpainting 4.
For categorical features we use one-hot encoding. We observe in experiments that it works better in terms of NRMSE and PFC than processing categorical features in GAIN as continuous ones and then rounding them to the nearest category.
For categorical features we also use the reconstruction loss L_M(x_i, x′_i) = −(1/|X_i|) Σ_{j=1}^{|X_i|} x_{i,j} log(x′_{i,j}), where |X_i| is the number of categories of the i-th feature and x_{i,j} is the j-th component of the one-hot encoding of the feature x_i. Such an L_M enforces an equal contribution of each categorical feature to the whole reconstruction loss.
We use one more modification of LM (x, x′) for binary and categorical features. Cross-entropy loss in LM penalizes incorrect reconstructions of categorical and binary features much more than incorrect reconstructions for continuous ones. To avoid such imbalance we mixed L2 and cross-entropy reconstruction losses for binary and categorical features with weights 0.8 and 0.2 respectively:
L′_M(x_i, x′_i) = 0.2 · L_M(x_i, x′_i) + 0.8 · { (1/|X_i|) Σ_{j=1}^{|X_i|} (x_{i,j} − x′_{i,j})², if x_i is categorical;  (x_i − x′_i)², if x_i is binary }   (12)
We observe in experiments that this modification also works better in terms of NRMSE and PFC than the original model.
We use validation set which contains 5% of the observed features for the best model selection (hyperparameter is the number of iterations).
In the original GAIN paper the authors propose to use cross-validation for the hyper-parameter α ∈ {0.1, 0.5, 1, 2, 10}. We observe that using α = 10 and a hint h = b ◦ m + 0.5(1 − b), where the vector b is sampled from a Bernoulli distribution with p = 0.01, provides better results in terms of NRMSE and PFC than the original model with every α ∈ {0.1, 0.5, 1, 2, 10}. Such a hint distribution makes the model theoretically inconsistent but works well in practice (see table 7).
Table 7 shows that our modifications provide consistently not worse or even better imputations than the original GAIN (in terms of NRMSE and PFC, on the considered datasets). So in this paper for the missing feature imputation problem we report the results of our modification of GAIN.
3: https://github.com/pathak22/context-encoder/blob/master/train_random.lua#L273
4: https://github.com/jsyoon0823/GAIN
B THEORY
B.1 VAEAC UNIVERSALITY
The theoretical guarantees that VAEAC can model an arbitrary distribution are based on the same guarantees for the Conditional Variational Autoencoder (CVAE). We prove below that if CVAE can model each of the conditional distributions p(x_b|x_{1−b}), then VAEAC can model all of them. We can imagine 2^D CVAEs, one learned for each mask. Because neural networks are universal approximators, the VAEAC networks can model the union of the CVAE networks, so that the VAEAC network performs the transformation defined by the network of the CVAE corresponding to the given mask:
p_{ψ,VAEAC}(z|x_{1−b}, b) = p_{ψ,CVAE,1−b}(z|x_{1−b})   ∀x, b
p_{θ,VAEAC}(x_b|z, x_{1−b}, b) = p_{θ,CVAE,1−b}(x_b|z, x_{1−b})   ∀z, x, b

So if CVAE can model any distribution p(x|y), then VAEAC can as well. The guarantees for CVAE in the case of continuous variables are based on the fact that every smooth distribution can be approximated with a large enough mixture of Gaussians, which is a special case of CVAE's generative model. These guarantees can be extended to the mixed categorical-continuous case as well. Actually, there are distributions over categorical variables which a CVAE with Gaussian prior and proposal distributions cannot learn. Nevertheless, this kind of limitation is not fundamental and is caused by a poor proposal distribution family.
B.2 WHY VAEAC NEEDS TARGET VALUES FOR MISSING FEATURES IMPUTATION?
Consider a dataset with D-dimensional objects x where each feature may be missing (which we denote by x_i = ω) and their target values y. In this section we show that better results are achieved when our model learns the concatenation of the object features x and the targets y. The following example shows the necessity of it. Consider a dataset where x_1 = 1, x_2 ∼ N(x_2|y, 1), and p_d(y = 0) = p_d(y = 5) = 0.5. In this case p_d(x_2|x_1 = 1) = 0.5 N(x_2|0, 1) + 0.5 N(x_2|5, 1). We can see that generating data from p_d(x_2|x_1) may only confuse the classifier, because with probability 0.5 it generates x_2 ∼ N(0, 1) for y = 5 and x_2 ∼ N(5, 1) for y = 0. On the other hand, p_d(x_2|x_1, y) = N(x_2|y, 1). Filling gaps using p_d(x_2|x_1, y) may only improve the classifier or regressor by giving it some information from the joint distribution p_d(x, y) and thus simplifying the dependence to be learned at training time. So we treat y as an additional feature that is always unobserved during the testing time.
B.3 UNIVERSAL MARGINALIZER: TRAINING PROCEDURE MODIFICATION
A problem the authors did not address in the original paper is the relation between the distribution of unobserved components p(b) at the testing stage and the distribution of masks in the requests to UM, p̂(b). The distribution over masks p(b) induces the distribution p̂(b), and in most cases p(b) ≠ p̂(b). The distribution p̂(b) also depends on the permutations (i_1, i_2, …, i_{|b|}) that we use to generate objects.
We observed in experiments that UM must be trained using the unobserved mask distribution p̂(b). For example, if all masks from p(b) have a fixed number of unobserved components (e.g., D/2), then UM will never see an example of a mask with 1, 2, …, D/2 − 1 unobserved components, which is necessary to generate a sample conditioned on D/2 components. That leads to a drastically low likelihood estimate for the test set and unrealistic samples.
We developed an easy generative process for p̂(b) for an arbitrary p(b) when the permutation of unobserved components (i_1, i_2, …, i_{|b|}) is chosen randomly and equiprobably: first we generate b_0 ∼ p(b) and u ∼ U[0, 1], then b_1 ∼ Bernoulli(u)^D, and set b = b_0 ◦ b_1. A more complicated generative process exists for a sorted permutation where i_{j−1} < i_j ∀j: 2 ≤ j ≤ |b|. In the experiments we use a uniform distribution over the permutations.
C GAUSSIAN STOCHASTIC NEURAL NETWORK
The Gaussian stochastic neural network (13) and the hybrid model (14) were originally proposed in the paper on Conditional VAE (Sohn et al., 2015). The motivation the authors mention in the paper is as follows. During training the proposal distribution q_φ(z|x, y) is used to generate the latent variables z, while during the testing stage the prior p_ψ(z|y) is used. The KL divergence tries to close the gap between the two distributions but, according to the authors, it is not enough. To overcome the issue the authors propose to use a hybrid model (14), a weighted mixture of the variational lower bound (3) and a single-sample Monte-Carlo estimate of the log-likelihood (13). The model corresponding to the second term is called the Gaussian Stochastic Neural Network (13), because it is a feed-forward neural network with a single Gaussian stochastic layer in the middle. Also, GSNN is a special case of CVAE where q_φ(z|x, y) = p_ψ(z|y).
L_GSNN(x, y; θ, ψ) = E_{p_ψ(z|y)} log p_θ(x|z, y)   (13)
L(x, y; θ, ψ, φ) = α L_CVAE(x, y; θ, ψ, φ) + (1 − α) L_GSNN(x, y; θ, ψ),   α ∈ [0, 1]   (14)
Authors report that hybrid model and GSNN outperform CVAE in terms of segmentation accuracy on the majority of datasets.
We can also add that this technique seems to soften the “holes problem” (Makhzani et al., 2016). In Makhzani et al. (2016) authors observe that vectors z from prior distribution may be different enough from all vectors z from the proposal distribution at the training stage, so the generator network may be confused at the testing stage. Due to this problem CVAE can have good reconstructions of y given z ∼ qφ(z|x, y), while samples of y given z ∼ pψ(z|x) are not realistic.
The same trick is applicable to our model as well:
L_GSNN(x, b; θ, ψ) = E_{p_ψ(z|x_{1−b},b)} log p_θ(x_b|z, x_{1−b}, b)   (15)
L(x, b; θ, ψ, φ) = α L_VAEAC(x, b; θ, ψ, φ) + (1 − α) L_GSNN(x, b; θ, ψ),   α ∈ [0, 1]   (16)
In order to reflect the difference between sampling z from the prior and the proposal distributions, the authors of CVAE use two methods of log-likelihood estimation:
log p_{θ,ψ}(x|y) ≈ log (1/S) Σ_{i=1}^{S} p_θ(x|z_i, y),   z_i ∼ p_ψ(z|y)   (17)

log p_{θ,ψ}(x|y) ≈ log (1/S) Σ_{i=1}^{S} [p_θ(x|z_i, y) p_ψ(z_i|y) / q_φ(z_i|x, y)],   z_i ∼ q_φ(z|x, y)   (18)
The first estimator is called the Monte-Carlo estimator and the second one the Importance Sampling estimator (also known as IWAE). They are asymptotically equivalent, but in practice the Monte-Carlo estimator requires many more samples to reach the same estimation accuracy. A small S leads to underestimation of the log-likelihood for both estimators (Burda et al., 2015), but for Monte-Carlo the underestimation is much more pronounced.
We perform an additional study of GSNN and the hybrid model and show that they have drawbacks when the target distribution p(x|y) has multiple distinct local maxima.
C.1 THEORETICAL STUDY
In this section we show why GSNN cannot learn distributions with several different modes and why it leads to blurry image samples.
For simplicity of notation, we consider the hybrid model for a standard VAE:
L(x; φ, ψ, θ) = α E_{z∼q_φ(z|x)} log [p_θ(x|z) p_ψ(z) / q_φ(z|x)] + (1 − α) E_{z∼p_ψ(z)} log p_θ(x|z)   (19)
The hybrid model (16) for VAEAC can be obtained from (19) by replacing x with xb and conditioning all distributions on x1−b and b. The validity of the further equations and conclusions remains for VAEAC after this replacement.
Consider now a categorical latent variable z which can take one of K values. Let x be a random variable with true distribution p_d(x) to be modeled. Consider the following true data distribution: p_d(x = x_i) = 1/K for i ∈ {1, 2, …, K} and some values x_1, x_2, …, x_K. So the true distribution has K different equiprobable modes. Suppose the generator network NN_θ models the mapping from z to some vector of parameters v_z = NN_θ(z), and define the generative distribution as some function of these parameters: p_θ(x|z) = f(x, v_z). Therefore, the parameters θ are just the set v_1, v_2, …, v_K. For the simplicity of the model we assume p_ψ(z) = 1/K. Taking p_ψ(z) = 1/K into account, we obtain the optimal q(z = i|x) = f(x, v_i) / Σ_{j=1}^{K} f(x, v_j). Using (19) and the above formulas for q_φ, p_ψ and p_θ we obtain the following optimization problem:
max_{v_1,…,v_K} (1/K) Σ_{i=1}^{K} [ α Σ_{j=1}^{K} ( f(x_i, v_j) / Σ_{k=1}^{K} f(x_i, v_k) ) log( (1/K) f(x_i, v_j) / ( f(x_i, v_j) / Σ_{k=1}^{K} f(x_i, v_k) ) ) + (1 − α) Σ_{j=1}^{K} (1/K) log f(x_i, v_j) ]   (20)
It is easy to show that (20) is equivalent to the following optimization problem:
max_{v_1,…,v_K} Σ_{i=1}^{K} [ α log( Σ_{j=1}^{K} f(x_i, v_j) / K ) + (1 − α) Σ_{j=1}^{K} (1/K) log f(x_i, v_j) ]   (21)

It is clear from (21) that when α = 1 the log-likelihood of the initial model is optimized. On the other hand, when α = 0 the optimal point is v_1 = v_2 = … = v_K = argmax_v Σ_{i=1}^{K} log f(x_i, v), i.e. z doesn't influence the generative process, and for each z the generator produces the same v which maximizes the likelihood of the generative model f(x, v) on the given dataset of x's. For Bernoulli and Gaussian generative distributions f, such a v is just the average of all modes x_1, x_2, …, x_K. That explains why we observe blurry images below when using the GSNN model.
The same conclusion holds for continuous latent variables instead of categorical ones. Given K different modes in the true data distribution, VAE uses the proposal network to separate the prior distribution into K components (i.e. regions in the latent space), so that each region corresponds to one mode. On the other hand, in GSNN z is sampled independently of the mode which is to be reconstructed from it, so for each z the generator has to produce parameters suitable for all modes.
From this point of view, there is no difference between VAE and VAEAC. If the true conditional distribution has several different modes, then VAEAC can fit them all, while GSNN learns their average. If the true conditional distribution has one mode, GSNN and VAEAC are equivalent, and GSNN may even learn faster because it has fewer parameters.
The hybrid model is a trade-off between VAEAC and GSNN: the closer α is to zero, the more blurry and the closer to the average the model distribution becomes. The exact dependence of the model distribution on α can be derived analytically for simple data distributions or evaluated experimentally. We perform such an experimental evaluation in the next sections.
C.2 SYNTHETIC DATA
In this section we show that VAEAC is capable of learning a complex multimodal distribution of synthetic data while GSNN and the hybrid model are not. Let x ∈ R² and p(b_1 = 1) = p(b_2 = 1) = 0.5. The data distribution is p_d(x) = (1/8) Σ_{i=1}^{8} N(x|μ_i, (1/10) I), where μ_i ∼ N(μ_i|0, I). The distribution p_d(x) is plotted in figure 6. The dataset contains 100000 points sampled from p_d(x). We use a multi-layer perceptron with four ReLU layers of sizes 400-200-100-50 and 25-dimensional Gaussian latent variables.
For different mixture coefficients α we visualize samples from the learned distributions pψ,θ(x1, x2), pψ,θ(x1|x2), and pψ,θ(x2|x1). The observed features for the conditional distributions are generated from the marginal distributions p(x2) and p(x1) respectively.
We see in table 8 and in figure 7 that even with a very small GSNN weight the model is prevented from learning distributions with several local optima.
Figure 6: Probability density function of the synthetic data distribution.
Figure 7: VAEAC for synthetic data. Rows correspond to α = 1, 0.99, and 0.9. Columns: x_1 ∼ p_{ψ,θ}(x_1|x_2) with observed x_2 ∼ p(x_2); x_2 ∼ p_{ψ,θ}(x_2|x_1) with observed x_1 ∼ p(x_1); and (x_1, x_2) ∼ p_{ψ,θ}(x_1, x_2) with both features unknown.
Figure 8: MNIST inpaintings for (a) VAEAC and (b) GSNN. Left: input, with gray pixels unobserved; middle: samples from the model; right: ground truth.
GSNN also increases the Monte-Carlo log-likelihood estimate computed with a few samples while decreasing the much more precise Importance Sampling estimate. When α = 0.9 the whole distribution structure is lost.
We see that using α ≠ 1 ruins the multimodality of the restored distribution, so we highly recommend using α = 1, or at least α ≈ 1.
C.3 COMPARISON ON THE IMAGE INPAINTING PROBLEM
In figure 8 we can see that the inpaintings produced by GSNN are smooth, blurry and not diverse compared with VAEAC.
Table 9 shows that VAEAC learns the distribution over inpaintings better than GSNN in terms of test log-likelihood. Nevertheless, Monte-Carlo estimates with a small number of samples are sometimes better for GSNN, which indicates fewer local modes in the learned distribution and more blurriness in the samples.
D ADDITIONAL EXPERIMENTS
D.1 CONVERGENCE SPEED
In figure 9 one can see that VAEAC has a convergence speed similar to VAE in terms of iterations on the MNIST dataset. In our experiments we observed the same behaviour for other datasets. Each iteration of VAEAC is about 1.5 times slower than VAE due to the use of three networks instead of two.
D.2 MISSING FEATURES IMPUTATION
We evaluate the quality of imputations on different datasets (mostly from UCI (Lichman, 2013)). The evaluation is performed for VAEAC, GSNN (15) and NN (a neural network, which can be considered a special case of GSNN where pθ(z|x1−b, b) is a delta-function, and which produces a single imputation). We compare these methods with MICE (Buuren & Groothuis-Oudshoorn, 2010), MissForest (Stekhoven & Bühlmann, 2011), and GAIN (Yoon et al., 2018).
We see that for some datasets MICE and MissForest outperform VAEAC, GSNN, and NN. The reason is that for some datasets a random forest is a more natural model than a neural network.
The results also show that VAEAC, GSNN, and NN achieve similar imputation performance in terms of NRMSE, PFC, post-imputation R2-score, and accuracy. Given the results from appendix C, we can take this as weak evidence that the distribution of imputations has only one local maximum for the datasets from (Lichman, 2013).
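For reference, the two headline metrics admit short definitions (a NumPy sketch under a common convention; NRMSE is normalized here by the feature's standard deviation, though other normalizations exist):

import numpy as np

def nrmse(x_true, x_imp):
    # normalized root mean squared error for a continuous feature
    return np.sqrt(np.mean((x_true - x_imp) ** 2)) / np.std(x_true)

def pfc(x_true, x_imp):
    # proportion of falsely classified entries for a categorical feature
    return np.mean(x_true != x_imp)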
[Figure 10 caption, detached from its image: Left: input; the gray pixels are unobserved. Middle: samples from VAEAC. Right: ground truth.]
D.3 FACE INPAINTINGS
In figure 10 we provide samples of VAEAC on the CelebA dataset for the masks from (Li et al., 2017).
D.4 GAIN FOR IMAGE INPAINTING
GAIN (Yoon et al., 2018) does not use unobserved data during training, which makes it easier to apply to the missing features imputation problem. Nevertheless, this becomes a disadvantage when fully-observed training data is available but the missingness rate at the testing stage is high.
We consider the horizontal line mask for MNIST, which is described in appendix A.3. We use the released GAIN code5 with a different mask generator. The inpaintings from VAEAC, which uses the unobserved pixels during training, are shown in figure 1. The inpaintings from GAIN, which ignores unobserved pixels, are provided in figure 11. As can be seen in figure 11, GAIN fails to learn the conditional distribution for the given mask distribution p(b).
Nevertheless, we do not claim that GAIN is unsuitable for image inpainting. As shown in the supplementary material of (Yoon et al., 2018) and in the corresponding code, GAIN is able to learn conditional distributions when p(b) is a pixel-wise independent Bernoulli distribution with probability 0.5.
5https://github.com/jsyoon0823/GAIN
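Although the exact generator is specified in appendix A.3, a horizontal line mask for 28×28 MNIST plausibly looks like the following sketch (the band thickness and uniform placement are our assumptions):

import numpy as np

def horizontal_line_mask(rng, h=28, w=28, thickness=3):
    # mask a random horizontal band of rows; 1 = unobserved pixel
    mask = np.zeros((h, w), dtype=np.float32)
    top = rng.integers(0, h - thickness + 1)
    mask[top:top + thickness, :] = 1.0
    return mask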
[Figure 11 caption, detached from its image: GAIN inpaintings for the horizontal line mask. Left: input; the gray pixels are unobserved. Middle: samples from the model. Right: ground truth.]
[Figure 12 caption, detached from its image: Universal Marginalizer and VAEAC samples for the same inputs. Left: input; the gray pixels are unobserved. Middle: samples from the model. Right: ground truth.]
D.5 UNIVERSAL MARGINALIZER: ILLUSTRATIONS
In figure 12 we provide samples from the Universal Marginalizer (UM) and VAEAC for the same inputs.
Consider the case when the UM marginal distributions are parametrized with Gaussians. The simplest example of a distribution which UM cannot learn but VAEAC can is given in figure 13. | 1. What is the focus of the paper, and what are its contributions to the field?
2. How does the proposed model differ from previous approaches in handling missing features?
3. Can you provide more information about the experimental results and how they support the claims made in the paper?
4. How does the reviewer assess the technical soundness and clarity of the paper's content?
5. Are there any minor issues or suggestions for improvement regarding the paper's presentation or equations? | Review | Review
This paper introduces the VAEAC model. Inspired by CVAEs, it allows conditioning on any subset of the features, which yields a model able to achieve good results on image inpainting and feature imputation tasks.
The paper appears to be technically sound, and the experiments are thoughtfully designed. The writing is clear and the model is easy to understand. The closest related work, the Universal Marginalizer, is compared against thoroughly, with more compelling examples in the appendix. I would have preferred more of the experimental results in the main paper instead of in the appendix, especially as the authors state they chose to highlight their better results in the main paper.
While this is not the first model to handle data with missing features, it is still a fairly original and elegant formulation.
Minor details:
In equation (8), should x be x_b?